Test Report: Hyper-V_Windows 19024

                    
79b1b42e4c1f52f497f2c052d5e760f5044cd55a:2024-06-05:34765

Failed tests (14/199)

TestAddons/parallel/Registry (77.49s)
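Failure summary: the assertion at addons_test.go:366 (see the log below) requires "out/minikube-windows-amd64.exe -p addons-369400 ip" to produce an empty stderr, but the Docker CLI emits a warning because the metadata for its "default" context (the meta.json under C:\Users\jenkins.minikube6\.docker\contexts\meta\...) does not exist on the Jenkins host. The following is a minimal, hypothetical Go sketch — not part of minikube or its test suite — that only checks whether the Docker CLI context metadata directory referenced by that warning is present on the current host (the long hash in the warning's path appears to be a digest of the context name).

// check_docker_context.go -- hypothetical helper, not part of minikube or its
// integration tests. It reports whether the Docker CLI context metadata
// directory that the W0604 warning refers to exists on this host.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	home, err := os.UserHomeDir()
	if err != nil {
		fmt.Fprintln(os.Stderr, "cannot resolve home directory:", err)
		os.Exit(1)
	}

	// The Docker CLI keeps context metadata under <home>/.docker/contexts/meta.
	metaDir := filepath.Join(home, ".docker", "contexts", "meta")

	if _, err := os.Stat(metaDir); os.IsNotExist(err) {
		fmt.Printf("no Docker CLI context metadata at %s; the "+
			"\"Unable to resolve the current Docker CLI context\" warning is expected here\n", metaDir)
		return
	}
	fmt.Println("Docker CLI context metadata directory present:", metaDir)
}

Judging from the assertion message, the warning does not affect the command's stdout; the test fails only because its stderr check treats any output, including warning-level log lines, as an error.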

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 19.4721ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6knk6" [bfea29d5-1c99-42cd-a2d1-ccee4cafda07] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.0190892s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-vdxsk" [8590355a-b67e-4bf1-8b87-5e9de564093c] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.149795s
addons_test.go:342: (dbg) Run:  kubectl --context addons-369400 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-369400 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-369400 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.067134s)
addons_test.go:361: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-369400 ip
addons_test.go:361: (dbg) Done: out/minikube-windows-amd64.exe -p addons-369400 ip: (3.0166624s)
addons_test.go:366: expected stderr to be -empty- but got: *"W0604 21:38:32.559782   10176 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube6\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n"* .  args "out/minikube-windows-amd64.exe -p addons-369400 ip"
2024/06/04 21:38:35 [DEBUG] GET http://172.20.139.74:5000
addons_test.go:390: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-369400 addons disable registry --alsologtostderr -v=1
addons_test.go:390: (dbg) Done: out/minikube-windows-amd64.exe -p addons-369400 addons disable registry --alsologtostderr -v=1: (17.4170906s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p addons-369400 -n addons-369400
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p addons-369400 -n addons-369400: (14.4953165s)
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-369400 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p addons-369400 logs -n 25: (10.6060134s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-352000 | minikube6\jenkins | v1.33.1 | 04 Jun 24 21:29 UTC |                     |
	|         | -p download-only-352000                                                                     |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                                                                |                      |                   |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                                                             |                      |                   |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | minikube6\jenkins | v1.33.1 | 04 Jun 24 21:30 UTC | 04 Jun 24 21:30 UTC |
	| delete  | -p download-only-352000                                                                     | download-only-352000 | minikube6\jenkins | v1.33.1 | 04 Jun 24 21:30 UTC | 04 Jun 24 21:30 UTC |
	| start   | -o=json --download-only                                                                     | download-only-033100 | minikube6\jenkins | v1.33.1 | 04 Jun 24 21:30 UTC |                     |
	|         | -p download-only-033100                                                                     |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                                                                |                      |                   |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                                                             |                      |                   |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | minikube6\jenkins | v1.33.1 | 04 Jun 24 21:30 UTC | 04 Jun 24 21:30 UTC |
	| delete  | -p download-only-033100                                                                     | download-only-033100 | minikube6\jenkins | v1.33.1 | 04 Jun 24 21:30 UTC | 04 Jun 24 21:30 UTC |
	| delete  | -p download-only-352000                                                                     | download-only-352000 | minikube6\jenkins | v1.33.1 | 04 Jun 24 21:30 UTC | 04 Jun 24 21:30 UTC |
	| delete  | -p download-only-033100                                                                     | download-only-033100 | minikube6\jenkins | v1.33.1 | 04 Jun 24 21:30 UTC | 04 Jun 24 21:30 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-211000 | minikube6\jenkins | v1.33.1 | 04 Jun 24 21:30 UTC |                     |
	|         | binary-mirror-211000                                                                        |                      |                   |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |                   |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |                   |         |                     |                     |
	|         | http://127.0.0.1:62318                                                                      |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                                                             |                      |                   |         |                     |                     |
	| delete  | -p binary-mirror-211000                                                                     | binary-mirror-211000 | minikube6\jenkins | v1.33.1 | 04 Jun 24 21:30 UTC | 04 Jun 24 21:30 UTC |
	| addons  | disable dashboard -p                                                                        | addons-369400        | minikube6\jenkins | v1.33.1 | 04 Jun 24 21:30 UTC |                     |
	|         | addons-369400                                                                               |                      |                   |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-369400        | minikube6\jenkins | v1.33.1 | 04 Jun 24 21:30 UTC |                     |
	|         | addons-369400                                                                               |                      |                   |         |                     |                     |
	| start   | -p addons-369400 --wait=true                                                                | addons-369400        | minikube6\jenkins | v1.33.1 | 04 Jun 24 21:30 UTC | 04 Jun 24 21:38 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |                   |         |                     |                     |
	|         | --addons=registry                                                                           |                      |                   |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |                   |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |                   |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |                   |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |                   |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |                   |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |                   |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |                   |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |                   |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |                   |         |                     |                     |
	|         | --driver=hyperv --addons=ingress                                                            |                      |                   |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |                   |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |                   |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-369400        | minikube6\jenkins | v1.33.1 | 04 Jun 24 21:38 UTC | 04 Jun 24 21:38 UTC |
	|         | -p addons-369400                                                                            |                      |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |                   |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-369400        | minikube6\jenkins | v1.33.1 | 04 Jun 24 21:38 UTC | 04 Jun 24 21:38 UTC |
	|         | -p addons-369400                                                                            |                      |                   |         |                     |                     |
	| addons  | addons-369400 addons disable                                                                | addons-369400        | minikube6\jenkins | v1.33.1 | 04 Jun 24 21:38 UTC | 04 Jun 24 21:38 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |                   |         |                     |                     |
	|         | -v=1                                                                                        |                      |                   |         |                     |                     |
	| ip      | addons-369400 ip                                                                            | addons-369400        | minikube6\jenkins | v1.33.1 | 04 Jun 24 21:38 UTC | 04 Jun 24 21:38 UTC |
	| addons  | addons-369400 addons disable                                                                | addons-369400        | minikube6\jenkins | v1.33.1 | 04 Jun 24 21:38 UTC | 04 Jun 24 21:38 UTC |
	|         | registry --alsologtostderr                                                                  |                      |                   |         |                     |                     |
	|         | -v=1                                                                                        |                      |                   |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-369400        | minikube6\jenkins | v1.33.1 | 04 Jun 24 21:38 UTC |                     |
	|         | addons-369400                                                                               |                      |                   |         |                     |                     |
	| ssh     | addons-369400 ssh cat                                                                       | addons-369400        | minikube6\jenkins | v1.33.1 | 04 Jun 24 21:39 UTC |                     |
	|         | /opt/local-path-provisioner/pvc-d2e31ec4-d787-4fa8-8e02-97096b762939_default_test-pvc/file1 |                      |                   |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-369400        | minikube6\jenkins | v1.33.1 | 04 Jun 24 21:39 UTC |                     |
	|         | addons-369400                                                                               |                      |                   |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/04 21:30:35
	Running on machine: minikube6
	Binary: Built with gc go1.22.3 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0604 21:30:35.232113    6648 out.go:291] Setting OutFile to fd 712 ...
	I0604 21:30:35.232312    6648 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0604 21:30:35.232312    6648 out.go:304] Setting ErrFile to fd 728...
	I0604 21:30:35.232312    6648 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0604 21:30:35.258560    6648 out.go:298] Setting JSON to false
	I0604 21:30:35.264402    6648 start.go:129] hostinfo: {"hostname":"minikube6","uptime":83884,"bootTime":1717452750,"procs":186,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4474 Build 19045.4474","kernelVersion":"10.0.19045.4474 Build 19045.4474","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0604 21:30:35.265914    6648 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0604 21:30:35.324264    6648 out.go:177] * [addons-369400] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4474 Build 19045.4474
	I0604 21:30:35.329028    6648 notify.go:220] Checking for updates...
	I0604 21:30:35.334019    6648 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0604 21:30:35.336595    6648 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0604 21:30:35.338738    6648 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0604 21:30:35.342180    6648 out.go:177]   - MINIKUBE_LOCATION=19024
	I0604 21:30:35.344867    6648 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0604 21:30:35.347966    6648 driver.go:392] Setting default libvirt URI to qemu:///system
	I0604 21:30:41.092934    6648 out.go:177] * Using the hyperv driver based on user configuration
	I0604 21:30:41.097006    6648 start.go:297] selected driver: hyperv
	I0604 21:30:41.097006    6648 start.go:901] validating driver "hyperv" against <nil>
	I0604 21:30:41.097006    6648 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0604 21:30:41.145015    6648 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0604 21:30:41.146294    6648 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0604 21:30:41.146371    6648 cni.go:84] Creating CNI manager for ""
	I0604 21:30:41.146371    6648 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0604 21:30:41.146447    6648 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0604 21:30:41.146487    6648 start.go:340] cluster config:
	{Name:addons-369400 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717518322-19024@sha256:d2210ba725128d67c6173c8b8d82d6c8736e8dad7a6c389a278f795205c6764f Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:addons-369400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:
SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0604 21:30:41.146487    6648 iso.go:125] acquiring lock: {Name:mkd51e140550ee3ad29317eefa47594b071594dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0604 21:30:41.151255    6648 out.go:177] * Starting "addons-369400" primary control-plane node in "addons-369400" cluster
	I0604 21:30:41.153513    6648 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0604 21:30:41.154694    6648 preload.go:147] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
	I0604 21:30:41.154694    6648 cache.go:56] Caching tarball of preloaded images
	I0604 21:30:41.154903    6648 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0604 21:30:41.155313    6648 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0604 21:30:41.155452    6648 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-369400\config.json ...
	I0604 21:30:41.156261    6648 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-369400\config.json: {Name:mk776f0358932d47cbee127db86f5825abecf2b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0604 21:30:41.156495    6648 start.go:360] acquireMachinesLock for addons-369400: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0604 21:30:41.157842    6648 start.go:364] duration metric: took 0s to acquireMachinesLock for "addons-369400"
	I0604 21:30:41.158055    6648 start.go:93] Provisioning new machine with config: &{Name:addons-369400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19024/minikube-v1.33.1-1717518792-19024-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717518322-19024@sha256:d2210ba725128d67c6173c8b8d82d6c8736e8dad7a6c389a278f795205c6764f Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.1 ClusterName:addons-369400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptio
ns:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0604 21:30:41.158055    6648 start.go:125] createHost starting for "" (driver="hyperv")
	I0604 21:30:41.159413    6648 out.go:204] * Creating hyperv VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0604 21:30:41.159413    6648 start.go:159] libmachine.API.Create for "addons-369400" (driver="hyperv")
	I0604 21:30:41.159413    6648 client.go:168] LocalClient.Create starting
	I0604 21:30:41.161429    6648 main.go:141] libmachine: Creating CA: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0604 21:30:41.248148    6648 main.go:141] libmachine: Creating client certificate: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0604 21:30:41.746541    6648 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0604 21:30:43.963833    6648 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0604 21:30:43.963833    6648 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:30:43.972355    6648 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0604 21:30:45.771991    6648 main.go:141] libmachine: [stdout =====>] : False
	
	I0604 21:30:45.771991    6648 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:30:45.779800    6648 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0604 21:30:47.386668    6648 main.go:141] libmachine: [stdout =====>] : True
	
	I0604 21:30:47.386668    6648 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:30:47.386958    6648 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0604 21:30:51.350479    6648 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0604 21:30:51.350479    6648 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:30:51.364973    6648 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1717518792-19024-amd64.iso...
	I0604 21:30:51.884937    6648 main.go:141] libmachine: Creating SSH key...
	I0604 21:30:52.025076    6648 main.go:141] libmachine: Creating VM...
	I0604 21:30:52.025076    6648 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0604 21:30:55.026109    6648 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0604 21:30:55.026109    6648 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:30:55.037741    6648 main.go:141] libmachine: Using switch "Default Switch"
	I0604 21:30:55.037838    6648 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0604 21:30:56.915431    6648 main.go:141] libmachine: [stdout =====>] : True
	
	I0604 21:30:56.915431    6648 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:30:56.915431    6648 main.go:141] libmachine: Creating VHD
	I0604 21:30:56.915431    6648 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-369400\fixed.vhd' -SizeBytes 10MB -Fixed
	I0604 21:31:00.940142    6648 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-369400\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : CAD9AF2B-F65A-4BBE-A655-2C7FE262B57A
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0604 21:31:00.951077    6648 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:31:00.951077    6648 main.go:141] libmachine: Writing magic tar header
	I0604 21:31:00.951192    6648 main.go:141] libmachine: Writing SSH key tar header
	I0604 21:31:00.960774    6648 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-369400\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-369400\disk.vhd' -VHDType Dynamic -DeleteSource
	I0604 21:31:04.329252    6648 main.go:141] libmachine: [stdout =====>] : 
	I0604 21:31:04.340055    6648 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:31:04.340055    6648 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-369400\disk.vhd' -SizeBytes 20000MB
	I0604 21:31:06.949487    6648 main.go:141] libmachine: [stdout =====>] : 
	I0604 21:31:06.961343    6648 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:31:06.961343    6648 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM addons-369400 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-369400' -SwitchName 'Default Switch' -MemoryStartupBytes 4000MB
	I0604 21:31:10.849536    6648 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%!)(MISSING) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	addons-369400 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0604 21:31:10.849536    6648 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:31:10.849536    6648 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName addons-369400 -DynamicMemoryEnabled $false
	I0604 21:31:13.198880    6648 main.go:141] libmachine: [stdout =====>] : 
	I0604 21:31:13.198880    6648 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:31:13.211731    6648 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor addons-369400 -Count 2
	I0604 21:31:15.501960    6648 main.go:141] libmachine: [stdout =====>] : 
	I0604 21:31:15.501960    6648 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:31:15.514212    6648 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName addons-369400 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-369400\boot2docker.iso'
	I0604 21:31:18.240593    6648 main.go:141] libmachine: [stdout =====>] : 
	I0604 21:31:18.240593    6648 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:31:18.240593    6648 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName addons-369400 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-369400\disk.vhd'
	I0604 21:31:21.050815    6648 main.go:141] libmachine: [stdout =====>] : 
	I0604 21:31:21.050815    6648 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:31:21.050815    6648 main.go:141] libmachine: Starting VM...
	I0604 21:31:21.065927    6648 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM addons-369400
	I0604 21:31:24.361628    6648 main.go:141] libmachine: [stdout =====>] : 
	I0604 21:31:24.373459    6648 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:31:24.373503    6648 main.go:141] libmachine: Waiting for host to start...
	I0604 21:31:24.373550    6648 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-369400 ).state
	I0604 21:31:26.783917    6648 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 21:31:26.783917    6648 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:31:26.793743    6648 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-369400 ).networkadapters[0]).ipaddresses[0]
	I0604 21:31:29.455131    6648 main.go:141] libmachine: [stdout =====>] : 
	I0604 21:31:29.467564    6648 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:31:30.474233    6648 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-369400 ).state
	I0604 21:31:32.810777    6648 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 21:31:32.810777    6648 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:31:32.811718    6648 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-369400 ).networkadapters[0]).ipaddresses[0]
	I0604 21:31:35.454373    6648 main.go:141] libmachine: [stdout =====>] : 
	I0604 21:31:35.454562    6648 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:31:36.456337    6648 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-369400 ).state
	I0604 21:31:38.715000    6648 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 21:31:38.715000    6648 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:31:38.720279    6648 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-369400 ).networkadapters[0]).ipaddresses[0]
	I0604 21:31:41.359813    6648 main.go:141] libmachine: [stdout =====>] : 
	I0604 21:31:41.359813    6648 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:31:42.361802    6648 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-369400 ).state
	I0604 21:31:44.642892    6648 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 21:31:44.653427    6648 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:31:44.653427    6648 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-369400 ).networkadapters[0]).ipaddresses[0]
	I0604 21:31:47.320236    6648 main.go:141] libmachine: [stdout =====>] : 
	I0604 21:31:47.320317    6648 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:31:48.331536    6648 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-369400 ).state
	I0604 21:31:50.672685    6648 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 21:31:50.672685    6648 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:31:50.672685    6648 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-369400 ).networkadapters[0]).ipaddresses[0]
	I0604 21:31:53.437257    6648 main.go:141] libmachine: [stdout =====>] : 172.20.139.74
	
	I0604 21:31:53.437257    6648 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:31:53.452164    6648 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-369400 ).state
	I0604 21:31:55.682814    6648 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 21:31:55.682814    6648 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:31:55.682814    6648 machine.go:94] provisionDockerMachine start ...
	I0604 21:31:55.694786    6648 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-369400 ).state
	I0604 21:31:57.991315    6648 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 21:31:57.991315    6648 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:31:57.991315    6648 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-369400 ).networkadapters[0]).ipaddresses[0]
	I0604 21:32:00.666650    6648 main.go:141] libmachine: [stdout =====>] : 172.20.139.74
	
	I0604 21:32:00.678263    6648 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:32:00.684012    6648 main.go:141] libmachine: Using SSH client type: native
	I0604 21:32:00.690494    6648 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf6a540] 0xf6d120 <nil>  [] 0s} 172.20.139.74 22 <nil> <nil>}
	I0604 21:32:00.690494    6648 main.go:141] libmachine: About to run SSH command:
	hostname
	I0604 21:32:00.843371    6648 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0604 21:32:00.843371    6648 buildroot.go:166] provisioning hostname "addons-369400"
	I0604 21:32:00.843371    6648 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-369400 ).state
	I0604 21:32:03.154593    6648 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 21:32:03.154593    6648 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:32:03.154840    6648 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-369400 ).networkadapters[0]).ipaddresses[0]
	I0604 21:32:05.837714    6648 main.go:141] libmachine: [stdout =====>] : 172.20.139.74
	
	I0604 21:32:05.837714    6648 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:32:05.847995    6648 main.go:141] libmachine: Using SSH client type: native
	I0604 21:32:05.848633    6648 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf6a540] 0xf6d120 <nil>  [] 0s} 172.20.139.74 22 <nil> <nil>}
	I0604 21:32:05.848633    6648 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-369400 && echo "addons-369400" | sudo tee /etc/hostname
	I0604 21:32:06.014012    6648 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-369400
	
	I0604 21:32:06.014012    6648 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-369400 ).state
	I0604 21:32:08.295318    6648 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 21:32:08.306500    6648 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:32:08.306626    6648 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-369400 ).networkadapters[0]).ipaddresses[0]
	I0604 21:32:10.974468    6648 main.go:141] libmachine: [stdout =====>] : 172.20.139.74
	
	I0604 21:32:10.974468    6648 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:32:10.980266    6648 main.go:141] libmachine: Using SSH client type: native
	I0604 21:32:10.981126    6648 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf6a540] 0xf6d120 <nil>  [] 0s} 172.20.139.74 22 <nil> <nil>}
	I0604 21:32:10.981126    6648 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-369400' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-369400/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-369400' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0604 21:32:11.139239    6648 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0604 21:32:11.139239    6648 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0604 21:32:11.139239    6648 buildroot.go:174] setting up certificates
	I0604 21:32:11.139239    6648 provision.go:84] configureAuth start
	I0604 21:32:11.139776    6648 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-369400 ).state
	I0604 21:32:13.395343    6648 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 21:32:13.395343    6648 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:32:13.395343    6648 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-369400 ).networkadapters[0]).ipaddresses[0]
	I0604 21:32:16.079840    6648 main.go:141] libmachine: [stdout =====>] : 172.20.139.74
	
	I0604 21:32:16.079840    6648 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:32:16.090965    6648 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-369400 ).state
	I0604 21:32:18.329548    6648 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 21:32:18.329548    6648 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:32:18.342044    6648 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-369400 ).networkadapters[0]).ipaddresses[0]
	I0604 21:32:21.038759    6648 main.go:141] libmachine: [stdout =====>] : 172.20.139.74
	
	I0604 21:32:21.052370    6648 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:32:21.052515    6648 provision.go:143] copyHostCerts
	I0604 21:32:21.053596    6648 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0604 21:32:21.055410    6648 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0604 21:32:21.056836    6648 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0604 21:32:21.057875    6648 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.addons-369400 san=[127.0.0.1 172.20.139.74 addons-369400 localhost minikube]
	I0604 21:32:21.155303    6648 provision.go:177] copyRemoteCerts
	I0604 21:32:21.175759    6648 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0604 21:32:21.175871    6648 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-369400 ).state
	I0604 21:32:23.425328    6648 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 21:32:23.425328    6648 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:32:23.425328    6648 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-369400 ).networkadapters[0]).ipaddresses[0]
	I0604 21:32:26.143756    6648 main.go:141] libmachine: [stdout =====>] : 172.20.139.74
	
	I0604 21:32:26.155998    6648 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:32:26.156138    6648 sshutil.go:53] new ssh client: &{IP:172.20.139.74 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-369400\id_rsa Username:docker}
	I0604 21:32:26.283762    6648 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.1078515s)
	I0604 21:32:26.284040    6648 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0604 21:32:26.339936    6648 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0604 21:32:26.387430    6648 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0604 21:32:26.430214    6648 provision.go:87] duration metric: took 15.2908565s to configureAuth
	I0604 21:32:26.430214    6648 buildroot.go:189] setting minikube options for container-runtime
	I0604 21:32:26.444897    6648 config.go:182] Loaded profile config "addons-369400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0604 21:32:26.444897    6648 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-369400 ).state
	I0604 21:32:28.700398    6648 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 21:32:28.700677    6648 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:32:28.700791    6648 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-369400 ).networkadapters[0]).ipaddresses[0]
	I0604 21:32:31.358109    6648 main.go:141] libmachine: [stdout =====>] : 172.20.139.74
	
	I0604 21:32:31.369597    6648 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:32:31.378957    6648 main.go:141] libmachine: Using SSH client type: native
	I0604 21:32:31.379789    6648 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf6a540] 0xf6d120 <nil>  [] 0s} 172.20.139.74 22 <nil> <nil>}
	I0604 21:32:31.379789    6648 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0604 21:32:31.519273    6648 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0604 21:32:31.519273    6648 buildroot.go:70] root file system type: tmpfs
	I0604 21:32:31.519808    6648 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0604 21:32:31.519914    6648 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-369400 ).state
	I0604 21:32:33.818697    6648 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 21:32:33.818697    6648 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:32:33.818847    6648 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-369400 ).networkadapters[0]).ipaddresses[0]
	I0604 21:32:36.560570    6648 main.go:141] libmachine: [stdout =====>] : 172.20.139.74
	
	I0604 21:32:36.560570    6648 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:32:36.566769    6648 main.go:141] libmachine: Using SSH client type: native
	I0604 21:32:36.567324    6648 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf6a540] 0xf6d120 <nil>  [] 0s} 172.20.139.74 22 <nil> <nil>}
	I0604 21:32:36.567470    6648 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0604 21:32:36.737521    6648 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0604 21:32:36.737521    6648 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-369400 ).state
	I0604 21:32:39.022273    6648 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 21:32:39.022273    6648 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:32:39.022273    6648 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-369400 ).networkadapters[0]).ipaddresses[0]
	I0604 21:32:41.724761    6648 main.go:141] libmachine: [stdout =====>] : 172.20.139.74
	
	I0604 21:32:41.736569    6648 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:32:41.742311    6648 main.go:141] libmachine: Using SSH client type: native
	I0604 21:32:41.743120    6648 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf6a540] 0xf6d120 <nil>  [] 0s} 172.20.139.74 22 <nil> <nil>}
	I0604 21:32:41.743120    6648 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0604 21:32:43.964682    6648 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0604 21:32:43.964682    6648 machine.go:97] duration metric: took 48.2698392s to provisionDockerMachine
	I0604 21:32:43.964682    6648 client.go:171] duration metric: took 2m2.8043202s to LocalClient.Create
	I0604 21:32:43.964682    6648 start.go:167] duration metric: took 2m2.8043202s to libmachine.API.Create "addons-369400"
	I0604 21:32:43.965223    6648 start.go:293] postStartSetup for "addons-369400" (driver="hyperv")
	I0604 21:32:43.965293    6648 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0604 21:32:43.977774    6648 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0604 21:32:43.977774    6648 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-369400 ).state
	I0604 21:32:46.259890    6648 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 21:32:46.259890    6648 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:32:46.259890    6648 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-369400 ).networkadapters[0]).ipaddresses[0]
	I0604 21:32:48.927353    6648 main.go:141] libmachine: [stdout =====>] : 172.20.139.74
	
	I0604 21:32:48.939037    6648 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:32:48.939037    6648 sshutil.go:53] new ssh client: &{IP:172.20.139.74 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-369400\id_rsa Username:docker}
	I0604 21:32:49.056135    6648 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.078322s)
	I0604 21:32:49.070032    6648 ssh_runner.go:195] Run: cat /etc/os-release
	I0604 21:32:49.077965    6648 info.go:137] Remote host: Buildroot 2023.02.9
	I0604 21:32:49.078078    6648 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0604 21:32:49.078211    6648 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0604 21:32:49.078744    6648 start.go:296] duration metric: took 5.1134121s for postStartSetup
	I0604 21:32:49.081620    6648 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-369400 ).state
	I0604 21:32:51.398702    6648 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 21:32:51.410324    6648 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:32:51.410324    6648 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-369400 ).networkadapters[0]).ipaddresses[0]
	I0604 21:32:54.113251    6648 main.go:141] libmachine: [stdout =====>] : 172.20.139.74
	
	I0604 21:32:54.113251    6648 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:32:54.113251    6648 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-369400\config.json ...
	I0604 21:32:54.117029    6648 start.go:128] duration metric: took 2m12.9579483s to createHost
	I0604 21:32:54.117117    6648 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-369400 ).state
	I0604 21:32:56.383898    6648 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 21:32:56.396172    6648 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:32:56.396172    6648 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-369400 ).networkadapters[0]).ipaddresses[0]
	I0604 21:32:59.109615    6648 main.go:141] libmachine: [stdout =====>] : 172.20.139.74
	
	I0604 21:32:59.121978    6648 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:32:59.127886    6648 main.go:141] libmachine: Using SSH client type: native
	I0604 21:32:59.128588    6648 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf6a540] 0xf6d120 <nil>  [] 0s} 172.20.139.74 22 <nil> <nil>}
	I0604 21:32:59.128588    6648 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0604 21:32:59.275188    6648 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717536779.283918079
	
	I0604 21:32:59.275188    6648 fix.go:216] guest clock: 1717536779.283918079
	I0604 21:32:59.275188    6648 fix.go:229] Guest: 2024-06-04 21:32:59.283918079 +0000 UTC Remote: 2024-06-04 21:32:54.1170294 +0000 UTC m=+139.065299101 (delta=5.166888679s)
	I0604 21:32:59.275834    6648 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-369400 ).state
	I0604 21:33:01.541312    6648 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 21:33:01.553012    6648 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:33:01.553012    6648 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-369400 ).networkadapters[0]).ipaddresses[0]
	I0604 21:33:04.251188    6648 main.go:141] libmachine: [stdout =====>] : 172.20.139.74
	
	I0604 21:33:04.251188    6648 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:33:04.258170    6648 main.go:141] libmachine: Using SSH client type: native
	I0604 21:33:04.258421    6648 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf6a540] 0xf6d120 <nil>  [] 0s} 172.20.139.74 22 <nil> <nil>}
	I0604 21:33:04.258421    6648 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1717536779
	I0604 21:33:04.412370    6648 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Jun  4 21:32:59 UTC 2024
	
	I0604 21:33:04.412370    6648 fix.go:236] clock set: Tue Jun  4 21:32:59 UTC 2024
	 (err=<nil>)
	I0604 21:33:04.412370    6648 start.go:83] releasing machines lock for "addons-369400", held for 2m23.2534239s
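	The clock-fix sequence above reads the guest clock, compares it with the host's recorded time, and then sets the guest clock explicitly. A minimal sketch of the two guest-side commands involved, using the values from this run (illustrative only, not additional output from the test):
	  date +%s.%N                # read the guest clock, e.g. 1717536779.283918079
	  sudo date -s @1717536779   # pin the guest clock to the chosen epoch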
	I0604 21:33:04.413076    6648 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-369400 ).state
	I0604 21:33:06.659279    6648 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 21:33:06.659279    6648 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:33:06.659279    6648 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-369400 ).networkadapters[0]).ipaddresses[0]
	I0604 21:33:09.384739    6648 main.go:141] libmachine: [stdout =====>] : 172.20.139.74
	
	I0604 21:33:09.395737    6648 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:33:09.400412    6648 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0604 21:33:09.400622    6648 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-369400 ).state
	I0604 21:33:09.410326    6648 ssh_runner.go:195] Run: cat /version.json
	I0604 21:33:09.410326    6648 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-369400 ).state
	I0604 21:33:11.743949    6648 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 21:33:11.744168    6648 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:33:11.744233    6648 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-369400 ).networkadapters[0]).ipaddresses[0]
	I0604 21:33:11.772255    6648 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 21:33:11.772255    6648 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:33:11.772255    6648 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-369400 ).networkadapters[0]).ipaddresses[0]
	I0604 21:33:14.633017    6648 main.go:141] libmachine: [stdout =====>] : 172.20.139.74
	
	I0604 21:33:14.633085    6648 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:33:14.633620    6648 sshutil.go:53] new ssh client: &{IP:172.20.139.74 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-369400\id_rsa Username:docker}
	I0604 21:33:14.662109    6648 main.go:141] libmachine: [stdout =====>] : 172.20.139.74
	
	I0604 21:33:14.662109    6648 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:33:14.662846    6648 sshutil.go:53] new ssh client: &{IP:172.20.139.74 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-369400\id_rsa Username:docker}
	I0604 21:33:14.734051    6648 ssh_runner.go:235] Completed: cat /version.json: (5.3236837s)
	I0604 21:33:14.746714    6648 ssh_runner.go:195] Run: systemctl --version
	I0604 21:33:14.832206    6648 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.4316951s)
	I0604 21:33:14.845235    6648 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0604 21:33:14.855428    6648 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0604 21:33:14.867709    6648 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0604 21:33:14.895688    6648 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0604 21:33:14.895813    6648 start.go:494] detecting cgroup driver to use...
	I0604 21:33:14.896275    6648 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0604 21:33:14.946600    6648 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0604 21:33:14.978080    6648 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0604 21:33:14.999168    6648 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0604 21:33:15.011173    6648 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0604 21:33:15.045924    6648 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0604 21:33:15.080789    6648 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0604 21:33:15.115672    6648 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0604 21:33:15.153220    6648 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0604 21:33:15.185780    6648 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0604 21:33:15.218473    6648 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0604 21:33:15.253612    6648 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0604 21:33:15.285763    6648 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0604 21:33:15.321630    6648 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0604 21:33:15.360414    6648 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0604 21:33:15.569166    6648 ssh_runner.go:195] Run: sudo systemctl restart containerd
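	The sed edits above are intended to leave containerd on the cgroupfs driver, with the pause:3.9 sandbox image and the default CNI config directory. A quick guest-side check of the resulting settings (a sketch, not output captured from this run):
	  grep -E 'SystemdCgroup|sandbox_image|conf_dir|enable_unprivileged_ports' /etc/containerd/config.toml
	  # expected after the edits: sandbox_image = "registry.k8s.io/pause:3.9", SystemdCgroup = false,
	  # conf_dir = "/etc/cni/net.d", enable_unprivileged_ports = true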
	I0604 21:33:15.607635    6648 start.go:494] detecting cgroup driver to use...
	I0604 21:33:15.621382    6648 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0604 21:33:15.661378    6648 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0604 21:33:15.700743    6648 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0604 21:33:15.745222    6648 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0604 21:33:15.786915    6648 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0604 21:33:15.826061    6648 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0604 21:33:15.896275    6648 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0604 21:33:15.923936    6648 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0604 21:33:15.977137    6648 ssh_runner.go:195] Run: which cri-dockerd
	I0604 21:33:15.995457    6648 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0604 21:33:16.016522    6648 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0604 21:33:16.062477    6648 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0604 21:33:16.297632    6648 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0604 21:33:16.496782    6648 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0604 21:33:16.497052    6648 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
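	The 130-byte /etc/docker/daemon.json written here is not printed in the log; a cgroupfs daemon.json typically has roughly this shape (an assumption for illustration, not the actual payload):
	  cat /etc/docker/daemon.json
	  # {
	  #   "exec-opts": ["native.cgroupdriver=cgroupfs"],
	  #   "log-driver": "json-file",
	  #   "log-opts": { "max-size": "100m" },
	  #   "storage-driver": "overlay2"
	  # }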
	I0604 21:33:16.544362    6648 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0604 21:33:16.757143    6648 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0604 21:33:19.335371    6648 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5781271s)
	I0604 21:33:19.348503    6648 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0604 21:33:19.390929    6648 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0604 21:33:19.432221    6648 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0604 21:33:19.664262    6648 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0604 21:33:19.876558    6648 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0604 21:33:20.102364    6648 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0604 21:33:20.150799    6648 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0604 21:33:20.190349    6648 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0604 21:33:20.410594    6648 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0604 21:33:20.526118    6648 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0604 21:33:20.539788    6648 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0604 21:33:20.549677    6648 start.go:562] Will wait 60s for crictl version
	I0604 21:33:20.562442    6648 ssh_runner.go:195] Run: which crictl
	I0604 21:33:20.585358    6648 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0604 21:33:20.655509    6648 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.1.3
	RuntimeApiVersion:  v1
	I0604 21:33:20.667419    6648 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0604 21:33:20.720312    6648 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0604 21:33:20.757860    6648 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.1.3 ...
	I0604 21:33:20.757860    6648 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0604 21:33:20.762740    6648 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0604 21:33:20.762821    6648 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0604 21:33:20.762821    6648 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0604 21:33:20.762821    6648 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:24:f8:85 Flags:up|broadcast|multicast|running}
	I0604 21:33:20.766147    6648 ip.go:210] interface addr: fe80::4093:d10:ab69:6c7d/64
	I0604 21:33:20.766147    6648 ip.go:210] interface addr: 172.20.128.1/20
	I0604 21:33:20.780935    6648 ssh_runner.go:195] Run: grep 172.20.128.1	host.minikube.internal$ /etc/hosts
	I0604 21:33:20.787713    6648 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.20.128.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0604 21:33:20.815519    6648 kubeadm.go:877] updating cluster {Name:addons-369400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19024/minikube-v1.33.1-1717518792-19024-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717518322-19024@sha256:d2210ba725128d67c6173c8b8d82d6c8736e8dad7a6c389a278f795205c6764f Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:addons-369400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.139.74 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0604 21:33:20.815519    6648 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0604 21:33:20.826233    6648 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0604 21:33:20.852948    6648 docker.go:685] Got preloaded images: 
	I0604 21:33:20.852948    6648 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.1 wasn't preloaded
	I0604 21:33:20.865599    6648 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0604 21:33:20.903844    6648 ssh_runner.go:195] Run: which lz4
	I0604 21:33:20.925781    6648 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0604 21:33:20.933824    6648 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0604 21:33:20.934001    6648 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359564351 bytes)
	I0604 21:33:23.036887    6648 docker.go:649] duration metric: took 2.125602s to copy over tarball
	I0604 21:33:23.050758    6648 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0604 21:33:28.452286    6648 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (5.401342s)
	I0604 21:33:28.452374    6648 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0604 21:33:28.523287    6648 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0604 21:33:28.544797    6648 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0604 21:33:28.591328    6648 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0604 21:33:28.825518    6648 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0604 21:33:34.544546    6648 ssh_runner.go:235] Completed: sudo systemctl restart docker: (5.7189843s)
	I0604 21:33:34.556129    6648 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0604 21:33:34.582719    6648 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0604 21:33:34.582719    6648 cache_images.go:84] Images are preloaded, skipping loading
	I0604 21:33:34.582719    6648 kubeadm.go:928] updating node { 172.20.139.74 8443 v1.30.1 docker true true} ...
	I0604 21:33:34.583710    6648 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-369400 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.20.139.74
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:addons-369400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0604 21:33:34.593707    6648 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0604 21:33:34.634514    6648 cni.go:84] Creating CNI manager for ""
	I0604 21:33:34.634514    6648 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0604 21:33:34.634514    6648 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0604 21:33:34.634514    6648 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.20.139.74 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-369400 NodeName:addons-369400 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.20.139.74"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.20.139.74 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0604 21:33:34.634514    6648 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.20.139.74
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-369400"
	  kubeletExtraArgs:
	    node-ip: 172.20.139.74
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.20.139.74"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0604 21:33:34.647129    6648 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0604 21:33:34.667140    6648 binaries.go:44] Found k8s binaries, skipping transfer
	I0604 21:33:34.679122    6648 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0604 21:33:34.701983    6648 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0604 21:33:34.738078    6648 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0604 21:33:34.773584    6648 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
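	With the kubeadm config rendered above now on the guest, it can be sanity-checked before the real init; a sketch (not run by this test), assuming the same binary and config paths shown in the log:
	  sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" \
	    kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run
	  # prints the manifests and objects kubeadm would create, without modifying the host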
	I0604 21:33:34.827508    6648 ssh_runner.go:195] Run: grep 172.20.139.74	control-plane.minikube.internal$ /etc/hosts
	I0604 21:33:34.834325    6648 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.20.139.74	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0604 21:33:34.871011    6648 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0604 21:33:35.099186    6648 ssh_runner.go:195] Run: sudo systemctl start kubelet
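	At this point the kubelet unit (/lib/systemd/system/kubelet.service) and its 10-kubeadm.conf drop-in are installed and the service has been started. A guest-side sketch to confirm the merged unit and its state (illustrative, not output from this run):
	  systemctl cat kubelet        # shows kubelet.service plus the 10-kubeadm.conf drop-in
	  systemctl is-active kubelet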
	I0604 21:33:35.135916    6648 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-369400 for IP: 172.20.139.74
	I0604 21:33:35.136001    6648 certs.go:194] generating shared ca certs ...
	I0604 21:33:35.136001    6648 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0604 21:33:35.136258    6648 certs.go:240] generating "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0604 21:33:36.084802    6648 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt ...
	I0604 21:33:36.084802    6648 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt: {Name:mkb0ebdce3b528a3c449211fdfbba2d86c130c96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0604 21:33:36.085817    6648 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key ...
	I0604 21:33:36.085817    6648 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key: {Name:mk1ec59eaa4c2f7a35370569c3fc13a80bc1499d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0604 21:33:36.086814    6648 certs.go:240] generating "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0604 21:33:36.321570    6648 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt ...
	I0604 21:33:36.321570    6648 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt: {Name:mk78efc1a7bd38719c2f7a853f9109f9a1a3252e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0604 21:33:36.322579    6648 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key ...
	I0604 21:33:36.322579    6648 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key: {Name:mk57de77abeaf23b535083770f5522a07b562b59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0604 21:33:36.324065    6648 certs.go:256] generating profile certs ...
	I0604 21:33:36.324934    6648 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-369400\client.key
	I0604 21:33:36.324934    6648 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-369400\client.crt with IP's: []
	I0604 21:33:36.598973    6648 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-369400\client.crt ...
	I0604 21:33:36.598973    6648 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-369400\client.crt: {Name:mk32280cdaf57ca9fd8db27948d0e0850f1ea058 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0604 21:33:36.600057    6648 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-369400\client.key ...
	I0604 21:33:36.600057    6648 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-369400\client.key: {Name:mk94fef3e124dafbb22168e96aa0f5f01a72300e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0604 21:33:36.600852    6648 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-369400\apiserver.key.63545459
	I0604 21:33:36.601934    6648 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-369400\apiserver.crt.63545459 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.20.139.74]
	I0604 21:33:36.876672    6648 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-369400\apiserver.crt.63545459 ...
	I0604 21:33:36.876672    6648 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-369400\apiserver.crt.63545459: {Name:mk0fe24c79c187e054b5470104c47897030d1726 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0604 21:33:36.877975    6648 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-369400\apiserver.key.63545459 ...
	I0604 21:33:36.877975    6648 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-369400\apiserver.key.63545459: {Name:mk4f81904eced8923813798200e3fbb78b3f2350 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0604 21:33:36.879230    6648 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-369400\apiserver.crt.63545459 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-369400\apiserver.crt
	I0604 21:33:36.892732    6648 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-369400\apiserver.key.63545459 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-369400\apiserver.key
	I0604 21:33:36.893719    6648 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-369400\proxy-client.key
	I0604 21:33:36.894261    6648 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-369400\proxy-client.crt with IP's: []
	I0604 21:33:37.009193    6648 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-369400\proxy-client.crt ...
	I0604 21:33:37.010182    6648 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-369400\proxy-client.crt: {Name:mk63ba1f19f5cee81b2c1ab0c925f03a9f805ed7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0604 21:33:37.010995    6648 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-369400\proxy-client.key ...
	I0604 21:33:37.010995    6648 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-369400\proxy-client.key: {Name:mk96679ecc847c93b24efd6d271a7e0e3811afb3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0604 21:33:37.032077    6648 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0604 21:33:37.032718    6648 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0604 21:33:37.032718    6648 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0604 21:33:37.033325    6648 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0604 21:33:37.035679    6648 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0604 21:33:37.091996    6648 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0604 21:33:37.141179    6648 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0604 21:33:37.194173    6648 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0604 21:33:37.243988    6648 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-369400\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0604 21:33:37.294696    6648 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-369400\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0604 21:33:37.348834    6648 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-369400\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0604 21:33:37.403505    6648 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-369400\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0604 21:33:37.459628    6648 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0604 21:33:37.520403    6648 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
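	The apiserver certificate generated above was signed for 10.96.0.1, 127.0.0.1, 10.0.0.1 and 172.20.139.74; a guest-side sketch to confirm those SANs after the copy (not output from this run):
	  sudo openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt | grep -A2 'Subject Alternative Name'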
	I0604 21:33:37.571129    6648 ssh_runner.go:195] Run: openssl version
	I0604 21:33:37.595315    6648 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0604 21:33:37.631931    6648 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0604 21:33:37.640858    6648 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun  4 21:33 /usr/share/ca-certificates/minikubeCA.pem
	I0604 21:33:37.658702    6648 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0604 21:33:37.684479    6648 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
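	The b5213941.0 name used above is the OpenSSL subject hash of the minikube CA, which is why the preceding step ran openssl x509 -hash on it; a sketch of the correspondence (illustrative):
	  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	  ls -l /etc/ssl/certs/b5213941.0                                           # symlink to minikubeCA.pem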
	I0604 21:33:37.723022    6648 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0604 21:33:37.731872    6648 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0604 21:33:37.732542    6648 kubeadm.go:391] StartCluster: {Name:addons-369400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19024/minikube-v1.33.1-1717518792-19024-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717518322-19024@sha256:d2210ba725128d67c6173c8b8d82d6c8736e8dad7a6c389a278f795205c6764f Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:addons-369400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.139.74 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0604 21:33:37.742573    6648 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0604 21:33:37.786769    6648 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0604 21:33:37.817230    6648 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0604 21:33:37.854144    6648 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0604 21:33:37.877442    6648 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0604 21:33:37.877442    6648 kubeadm.go:156] found existing configuration files:
	
	I0604 21:33:37.892619    6648 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0604 21:33:37.912353    6648 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0604 21:33:37.924091    6648 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0604 21:33:37.955141    6648 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0604 21:33:37.975407    6648 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0604 21:33:37.988179    6648 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0604 21:33:38.018791    6648 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0604 21:33:38.036673    6648 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0604 21:33:38.049644    6648 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0604 21:33:38.086507    6648 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0604 21:33:38.104077    6648 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0604 21:33:38.115071    6648 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0604 21:33:38.134079    6648 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0604 21:33:38.426111    6648 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0604 21:33:53.152215    6648 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0604 21:33:53.152490    6648 kubeadm.go:309] [preflight] Running pre-flight checks
	I0604 21:33:53.152613    6648 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0604 21:33:53.152861    6648 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0604 21:33:53.152861    6648 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0604 21:33:53.152861    6648 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0604 21:33:53.157747    6648 out.go:204]   - Generating certificates and keys ...
	I0604 21:33:53.158039    6648 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0604 21:33:53.158039    6648 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0604 21:33:53.158039    6648 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0604 21:33:53.158039    6648 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0604 21:33:53.158591    6648 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0604 21:33:53.158757    6648 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0604 21:33:53.158864    6648 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0604 21:33:53.158913    6648 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-369400 localhost] and IPs [172.20.139.74 127.0.0.1 ::1]
	I0604 21:33:53.158913    6648 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0604 21:33:53.159452    6648 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-369400 localhost] and IPs [172.20.139.74 127.0.0.1 ::1]
	I0604 21:33:53.159558    6648 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0604 21:33:53.159558    6648 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0604 21:33:53.159558    6648 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0604 21:33:53.159558    6648 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0604 21:33:53.159558    6648 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0604 21:33:53.160126    6648 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0604 21:33:53.160735    6648 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0604 21:33:53.160735    6648 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0604 21:33:53.160735    6648 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0604 21:33:53.160735    6648 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0604 21:33:53.161338    6648 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0604 21:33:53.163398    6648 out.go:204]   - Booting up control plane ...
	I0604 21:33:53.163946    6648 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0604 21:33:53.163946    6648 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0604 21:33:53.164114    6648 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0604 21:33:53.164255    6648 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0604 21:33:53.164255    6648 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0604 21:33:53.164255    6648 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0604 21:33:53.164905    6648 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0604 21:33:53.164972    6648 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0604 21:33:53.164972    6648 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.001957576s
	I0604 21:33:53.164972    6648 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0604 21:33:53.164972    6648 kubeadm.go:309] [api-check] The API server is healthy after 7.502012234s
	I0604 21:33:53.165599    6648 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0604 21:33:53.165599    6648 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0604 21:33:53.165599    6648 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0604 21:33:53.166126    6648 kubeadm.go:309] [mark-control-plane] Marking the node addons-369400 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0604 21:33:53.166126    6648 kubeadm.go:309] [bootstrap-token] Using token: 2lp62g.wz8fmko82t3d5v3l
	I0604 21:33:53.170591    6648 out.go:204]   - Configuring RBAC rules ...
	I0604 21:33:53.170701    6648 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0604 21:33:53.170918    6648 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0604 21:33:53.170918    6648 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0604 21:33:53.171541    6648 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0604 21:33:53.171635    6648 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0604 21:33:53.171635    6648 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0604 21:33:53.171635    6648 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0604 21:33:53.171635    6648 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0604 21:33:53.171635    6648 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0604 21:33:53.171635    6648 kubeadm.go:309] 
	I0604 21:33:53.172213    6648 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0604 21:33:53.172213    6648 kubeadm.go:309] 
	I0604 21:33:53.172363    6648 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0604 21:33:53.172363    6648 kubeadm.go:309] 
	I0604 21:33:53.172466    6648 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0604 21:33:53.172550    6648 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0604 21:33:53.172808    6648 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0604 21:33:53.172808    6648 kubeadm.go:309] 
	I0604 21:33:53.172905    6648 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0604 21:33:53.172905    6648 kubeadm.go:309] 
	I0604 21:33:53.172966    6648 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0604 21:33:53.172999    6648 kubeadm.go:309] 
	I0604 21:33:53.173116    6648 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0604 21:33:53.173251    6648 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0604 21:33:53.173349    6648 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0604 21:33:53.173453    6648 kubeadm.go:309] 
	I0604 21:33:53.173611    6648 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0604 21:33:53.173791    6648 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0604 21:33:53.173857    6648 kubeadm.go:309] 
	I0604 21:33:53.174010    6648 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 2lp62g.wz8fmko82t3d5v3l \
	I0604 21:33:53.174442    6648 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:0ff0b921e821f4428dbcf012dd411f291cf65527f1de8321919343295d694e15 \
	I0604 21:33:53.174523    6648 kubeadm.go:309] 	--control-plane 
	I0604 21:33:53.174523    6648 kubeadm.go:309] 
	I0604 21:33:53.174709    6648 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0604 21:33:53.174709    6648 kubeadm.go:309] 
	I0604 21:33:53.174858    6648 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 2lp62g.wz8fmko82t3d5v3l \
	I0604 21:33:53.175166    6648 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:0ff0b921e821f4428dbcf012dd411f291cf65527f1de8321919343295d694e15 
	I0604 21:33:53.175258    6648 cni.go:84] Creating CNI manager for ""
	I0604 21:33:53.175279    6648 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0604 21:33:53.178601    6648 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0604 21:33:53.199795    6648 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0604 21:33:53.224507    6648 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
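	The 496-byte bridge CNI config written here is not printed in the log; a bridge conflist for the 10.244.0.0/16 pod CIDR chosen above typically has roughly this shape (an assumption for illustration, not the actual file):
	  cat /etc/cni/net.d/1-k8s.conflist
	  # {
	  #   "cniVersion": "0.3.1",
	  #   "name": "bridge",
	  #   "plugins": [
	  #     { "type": "bridge", "bridge": "bridge", "isDefaultGateway": true, "ipMasq": true, "hairpinMode": true,
	  #       "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
	  #     { "type": "portmap", "capabilities": { "portMappings": true } }
	  #   ]
	  # }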
	I0604 21:33:53.264120    6648 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0604 21:33:53.279535    6648 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 21:33:53.281301    6648 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-369400 minikube.k8s.io/updated_at=2024_06_04T21_33_53_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=901ac483c3e1097c63cda7493d918b612a8127f5 minikube.k8s.io/name=addons-369400 minikube.k8s.io/primary=true
	I0604 21:33:53.289274    6648 ops.go:34] apiserver oom_adj: -16
	I0604 21:33:53.436112    6648 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 21:33:53.938839    6648 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 21:33:54.442090    6648 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 21:33:54.946569    6648 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 21:33:55.448392    6648 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 21:33:55.935617    6648 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 21:33:56.442326    6648 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 21:33:56.941541    6648 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 21:33:57.443743    6648 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 21:33:57.941829    6648 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 21:33:58.441466    6648 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 21:33:58.944378    6648 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 21:33:59.448378    6648 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 21:33:59.947900    6648 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 21:34:00.437182    6648 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 21:34:00.939982    6648 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 21:34:01.439490    6648 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 21:34:01.944645    6648 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 21:34:02.445135    6648 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 21:34:02.940004    6648 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 21:34:03.446471    6648 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 21:34:03.947841    6648 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 21:34:04.440866    6648 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 21:34:04.942548    6648 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 21:34:05.443076    6648 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 21:34:05.946863    6648 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 21:34:06.451013    6648 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 21:34:06.942909    6648 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 21:34:07.440927    6648 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 21:34:07.594068    6648 kubeadm.go:1107] duration metric: took 14.3298391s to wait for elevateKubeSystemPrivileges
	W0604 21:34:07.594213    6648 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0604 21:34:07.594273    6648 kubeadm.go:393] duration metric: took 29.8615935s to StartCluster
	I0604 21:34:07.594329    6648 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0604 21:34:07.594383    6648 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0604 21:34:07.595706    6648 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0604 21:34:07.597314    6648 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0604 21:34:07.597314    6648 start.go:234] Will wait 6m0s for node &{Name: IP:172.20.139.74 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0604 21:34:07.603281    6648 out.go:177] * Verifying Kubernetes components...
	I0604 21:34:07.597314    6648 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0604 21:34:07.597623    6648 config.go:182] Loaded profile config "addons-369400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0604 21:34:07.604221    6648 addons.go:69] Setting yakd=true in profile "addons-369400"
	I0604 21:34:07.604221    6648 addons.go:69] Setting cloud-spanner=true in profile "addons-369400"
	I0604 21:34:07.604221    6648 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-369400"
	I0604 21:34:07.604221    6648 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-369400"
	I0604 21:34:07.604221    6648 addons.go:69] Setting registry=true in profile "addons-369400"
	I0604 21:34:07.608284    6648 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-369400"
	I0604 21:34:07.608284    6648 addons.go:234] Setting addon registry=true in "addons-369400"
	I0604 21:34:07.604221    6648 addons.go:69] Setting storage-provisioner=true in profile "addons-369400"
	I0604 21:34:07.608284    6648 addons.go:234] Setting addon storage-provisioner=true in "addons-369400"
	I0604 21:34:07.604221    6648 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-369400"
	I0604 21:34:07.604221    6648 addons.go:69] Setting volcano=true in profile "addons-369400"
	I0604 21:34:07.604221    6648 addons.go:69] Setting volumesnapshots=true in profile "addons-369400"
	I0604 21:34:07.604221    6648 addons.go:69] Setting helm-tiller=true in profile "addons-369400"
	I0604 21:34:07.604221    6648 addons.go:69] Setting default-storageclass=true in profile "addons-369400"
	I0604 21:34:07.604221    6648 addons.go:69] Setting gcp-auth=true in profile "addons-369400"
	I0604 21:34:07.604221    6648 addons.go:69] Setting metrics-server=true in profile "addons-369400"
	I0604 21:34:07.604221    6648 addons.go:69] Setting ingress=true in profile "addons-369400"
	I0604 21:34:07.604221    6648 addons.go:69] Setting inspektor-gadget=true in profile "addons-369400"
	I0604 21:34:07.604221    6648 addons.go:69] Setting ingress-dns=true in profile "addons-369400"
	I0604 21:34:07.608284    6648 addons.go:234] Setting addon cloud-spanner=true in "addons-369400"
	I0604 21:34:07.608284    6648 addons.go:234] Setting addon volumesnapshots=true in "addons-369400"
	I0604 21:34:07.608284    6648 host.go:66] Checking if "addons-369400" exists ...
	I0604 21:34:07.608284    6648 host.go:66] Checking if "addons-369400" exists ...
	I0604 21:34:07.608284    6648 addons.go:234] Setting addon metrics-server=true in "addons-369400"
	I0604 21:34:07.608284    6648 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-369400"
	I0604 21:34:07.608284    6648 addons.go:234] Setting addon inspektor-gadget=true in "addons-369400"
	I0604 21:34:07.608284    6648 host.go:66] Checking if "addons-369400" exists ...
	I0604 21:34:07.608284    6648 addons.go:234] Setting addon ingress=true in "addons-369400"
	I0604 21:34:07.609356    6648 host.go:66] Checking if "addons-369400" exists ...
	I0604 21:34:07.609356    6648 host.go:66] Checking if "addons-369400" exists ...
	I0604 21:34:07.609356    6648 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-369400"
	I0604 21:34:07.609356    6648 addons.go:234] Setting addon helm-tiller=true in "addons-369400"
	I0604 21:34:07.609356    6648 host.go:66] Checking if "addons-369400" exists ...
	I0604 21:34:07.610228    6648 mustload.go:65] Loading cluster: addons-369400
	I0604 21:34:07.610228    6648 addons.go:234] Setting addon ingress-dns=true in "addons-369400"
	I0604 21:34:07.610228    6648 host.go:66] Checking if "addons-369400" exists ...
	I0604 21:34:07.608284    6648 addons.go:234] Setting addon yakd=true in "addons-369400"
	I0604 21:34:07.610228    6648 config.go:182] Loaded profile config "addons-369400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0604 21:34:07.608284    6648 host.go:66] Checking if "addons-369400" exists ...
	I0604 21:34:07.610228    6648 host.go:66] Checking if "addons-369400" exists ...
	I0604 21:34:07.608284    6648 host.go:66] Checking if "addons-369400" exists ...
	I0604 21:34:07.611216    6648 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-369400 ).state
	I0604 21:34:07.608284    6648 addons.go:234] Setting addon volcano=true in "addons-369400"
	I0604 21:34:07.611216    6648 host.go:66] Checking if "addons-369400" exists ...
	I0604 21:34:07.608284    6648 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-369400"
	I0604 21:34:07.611216    6648 host.go:66] Checking if "addons-369400" exists ...
	I0604 21:34:07.614238    6648 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-369400 ).state
	I0604 21:34:07.608284    6648 host.go:66] Checking if "addons-369400" exists ...
	I0604 21:34:07.617218    6648 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-369400 ).state
	I0604 21:34:07.617218    6648 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-369400 ).state
	I0604 21:34:07.618217    6648 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-369400 ).state
	I0604 21:34:07.618217    6648 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-369400 ).state
	I0604 21:34:07.618217    6648 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-369400 ).state
	I0604 21:34:07.619223    6648 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-369400 ).state
	I0604 21:34:07.621220    6648 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-369400 ).state
	I0604 21:34:07.621220    6648 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-369400 ).state
	I0604 21:34:07.624847    6648 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-369400 ).state
	I0604 21:34:07.624847    6648 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-369400 ).state
	I0604 21:34:07.626326    6648 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-369400 ).state
	I0604 21:34:07.626326    6648 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-369400 ).state
	I0604 21:34:07.626842    6648 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-369400 ).state
	I0604 21:34:07.627209    6648 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-369400 ).state
	I0604 21:34:07.641362    6648 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0604 21:34:09.082235    6648 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.4849092s)
	I0604 21:34:09.082685    6648 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.20.128.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0604 21:34:09.082789    6648 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.441416s)
	I0604 21:34:09.443197    6648 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0604 21:34:10.910442    6648 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.20.128.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.8277434s)
	I0604 21:34:10.910442    6648 start.go:946] {"host.minikube.internal": 172.20.128.1} host record injected into CoreDNS's ConfigMap
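The sed pipeline completed above edits the coredns ConfigMap so the Corefile gains a hosts block resolving host.minikube.internal to the host gateway (172.20.128.1 in this run). A minimal check, using the context name from this run:

    # Verify the injected host record (illustrative check, not part of the test)
    kubectl --context addons-369400 -n kube-system get configmap coredns -o yaml | grep -A3 'hosts {'
    # Expected stanza, per the sed expression above:
    #   hosts {
    #      172.20.128.1 host.minikube.internal
    #      fallthrough
    #   }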
	I0604 21:34:10.917429    6648 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.4742213s)
	I0604 21:34:10.920765    6648 node_ready.go:35] waiting up to 6m0s for node "addons-369400" to be "Ready" ...
	I0604 21:34:11.286125    6648 node_ready.go:49] node "addons-369400" has status "Ready":"True"
	I0604 21:34:11.286125    6648 node_ready.go:38] duration metric: took 365.3568ms for node "addons-369400" to be "Ready" ...
	I0604 21:34:11.286125    6648 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0604 21:34:11.841131    6648 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-h99w6" in "kube-system" namespace to be "Ready" ...
	I0604 21:34:12.207525    6648 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-369400" context rescaled to 1 replicas
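The rescale logged by kapi.go trims CoreDNS to a single replica for this single-node cluster; roughly the same effect as the command below (illustrative, not the exact call the harness makes):

    # Scale the coredns deployment down to one replica
    kubectl --context addons-369400 -n kube-system scale deployment coredns --replicas=1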
	I0604 21:34:13.845516    6648 pod_ready.go:102] pod "coredns-7db6d8ff4d-h99w6" in "kube-system" namespace has status "Ready":"False"
	I0604 21:34:14.823183    6648 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 21:34:14.823183    6648 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:34:14.829174    6648 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.29.0
	I0604 21:34:14.845970    6648 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0604 21:34:14.845970    6648 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0604 21:34:14.837317    6648 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 21:34:14.845970    6648 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-369400 ).state
	I0604 21:34:14.845970    6648 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:34:14.843685    6648 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 21:34:14.850621    6648 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.7.0
	I0604 21:34:14.847854    6648 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:34:14.849312    6648 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 21:34:14.858407    6648 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:34:14.864420    6648 addons.go:234] Setting addon default-storageclass=true in "addons-369400"
	I0604 21:34:14.864420    6648 host.go:66] Checking if "addons-369400" exists ...
	I0604 21:34:14.881417    6648 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.7.0
	I0604 21:34:14.866948    6648 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-369400 ).state
	I0604 21:34:14.869422    6648 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-369400"
	I0604 21:34:14.881417    6648 host.go:66] Checking if "addons-369400" exists ...
	I0604 21:34:14.887441    6648 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.7.0
	I0604 21:34:14.888410    6648 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-369400 ).state
	I0604 21:34:14.905297    6648 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 21:34:14.905297    6648 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:34:14.905297    6648 host.go:66] Checking if "addons-369400" exists ...
	I0604 21:34:14.920357    6648 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 21:34:14.920357    6648 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:34:14.925346    6648 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0604 21:34:14.946875    6648 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0604 21:34:14.946875    6648 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0604 21:34:14.947004    6648 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-369400 ).state
	I0604 21:34:14.997877    6648 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0604 21:34:14.997877    6648 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (626760 bytes)
	I0604 21:34:14.997877    6648 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-369400 ).state
	I0604 21:34:15.203032    6648 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 21:34:15.203032    6648 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:34:15.213597    6648 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0604 21:34:15.216899    6648 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0604 21:34:15.216953    6648 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0604 21:34:15.216953    6648 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-369400 ).state
	I0604 21:34:15.437949    6648 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 21:34:15.437949    6648 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:34:15.443888    6648 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0604 21:34:15.448595    6648 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0604 21:34:15.448595    6648 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0604 21:34:15.448595    6648 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-369400 ).state
	I0604 21:34:15.446533    6648 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 21:34:15.449586    6648 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:34:15.457655    6648 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.15.0
	I0604 21:34:15.462964    6648 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0604 21:34:15.463564    6648 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0604 21:34:15.463686    6648 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-369400 ).state
	I0604 21:34:15.471964    6648 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 21:34:15.471964    6648 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:34:15.479526    6648 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0604 21:34:15.475958    6648 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 21:34:15.475958    6648 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 21:34:15.492058    6648 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:34:15.496230    6648 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0604 21:34:15.492058    6648 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:34:15.515664    6648 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0604 21:34:15.504227    6648 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.17
	I0604 21:34:15.518324    6648 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 21:34:15.528506    6648 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0604 21:34:15.528506    6648 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:34:15.532528    6648 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0604 21:34:15.543615    6648 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0604 21:34:15.543615    6648 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0604 21:34:15.538513    6648 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0604 21:34:15.545070    6648 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0604 21:34:15.545070    6648 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-369400 ).state
	I0604 21:34:15.549925    6648 out.go:177]   - Using image docker.io/registry:2.8.3
	I0604 21:34:15.543615    6648 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-369400 ).state
	I0604 21:34:15.567071    6648 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0604 21:34:15.556469    6648 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0604 21:34:15.574068    6648 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0604 21:34:15.581079    6648 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 21:34:15.581079    6648 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0604 21:34:15.581079    6648 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:34:15.584135    6648 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0604 21:34:15.577648    6648 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0604 21:34:15.589248    6648 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0604 21:34:15.594671    6648 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0604 21:34:15.594731    6648 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0604 21:34:15.594731    6648 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-369400 ).state
	I0604 21:34:15.594915    6648 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0604 21:34:15.603014    6648 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-369400 ).state
	I0604 21:34:15.605675    6648 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0604 21:34:15.605675    6648 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0604 21:34:15.605675    6648 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-369400 ).state
	I0604 21:34:15.652659    6648 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 21:34:15.652659    6648 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:34:15.692831    6648 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.1
	I0604 21:34:15.804426    6648 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0604 21:34:15.806370    6648 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 21:34:15.967490    6648 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:34:15.970777    6648 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0604 21:34:16.013490    6648 pod_ready.go:102] pod "coredns-7db6d8ff4d-h99w6" in "kube-system" namespace has status "Ready":"False"
	I0604 21:34:16.097492    6648 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0604 21:34:16.283023    6648 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0604 21:34:16.283023    6648 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0604 21:34:16.283023    6648 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-369400 ).state
	I0604 21:34:16.244088    6648 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0604 21:34:16.286675    6648 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0604 21:34:16.286675    6648 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-369400 ).state
	I0604 21:34:18.589367    6648 pod_ready.go:102] pod "coredns-7db6d8ff4d-h99w6" in "kube-system" namespace has status "Ready":"False"
	I0604 21:34:20.844605    6648 pod_ready.go:102] pod "coredns-7db6d8ff4d-h99w6" in "kube-system" namespace has status "Ready":"False"
	I0604 21:34:21.617786    6648 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 21:34:21.617786    6648 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:34:21.617786    6648 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0604 21:34:21.617786    6648 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
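The 271-byte storageclass.yaml pushed here belongs to the default-storageclass addon. Once applied it can be inspected as below, assuming minikube's usual class name "standard" (the manifest itself is not reproduced in this log):

    # Inspect the default StorageClass created by the addon
    kubectl --context addons-369400 get storageclass standard -o yaml
    # Expect the default-class annotation and the k8s.io/minikube-hostpath provisioner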
	I0604 21:34:21.617786    6648 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-369400 ).state
	I0604 21:34:21.704261    6648 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 21:34:21.705026    6648 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:34:21.705195    6648 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-369400 ).networkadapters[0]).ipaddresses[0]
	I0604 21:34:21.901542    6648 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 21:34:21.901542    6648 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:34:21.911538    6648 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0604 21:34:21.914522    6648 out.go:177]   - Using image docker.io/busybox:stable
	I0604 21:34:21.909536    6648 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 21:34:21.922674    6648 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:34:21.922674    6648 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-369400 ).networkadapters[0]).ipaddresses[0]
	I0604 21:34:21.925683    6648 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0604 21:34:21.925683    6648 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0604 21:34:21.925683    6648 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-369400 ).state
	I0604 21:34:21.967894    6648 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 21:34:21.967894    6648 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:34:21.967894    6648 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-369400 ).networkadapters[0]).ipaddresses[0]
	I0604 21:34:22.292305    6648 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 21:34:22.292305    6648 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:34:22.292305    6648 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-369400 ).networkadapters[0]).ipaddresses[0]
	I0604 21:34:22.539838    6648 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 21:34:22.539838    6648 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:34:22.539838    6648 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-369400 ).networkadapters[0]).ipaddresses[0]
	I0604 21:34:22.557430    6648 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 21:34:22.557430    6648 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:34:22.557430    6648 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-369400 ).networkadapters[0]).ipaddresses[0]
	I0604 21:34:22.563423    6648 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 21:34:22.563423    6648 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:34:22.563423    6648 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-369400 ).networkadapters[0]).ipaddresses[0]
	I0604 21:34:22.682372    6648 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 21:34:22.682372    6648 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:34:22.682372    6648 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-369400 ).networkadapters[0]).ipaddresses[0]
	I0604 21:34:22.709513    6648 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 21:34:22.711494    6648 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:34:22.712016    6648 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-369400 ).networkadapters[0]).ipaddresses[0]
	I0604 21:34:23.184905    6648 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 21:34:23.184905    6648 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:34:23.184905    6648 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-369400 ).networkadapters[0]).ipaddresses[0]
	I0604 21:34:23.281658    6648 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 21:34:23.281658    6648 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:34:23.282631    6648 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-369400 ).networkadapters[0]).ipaddresses[0]
	I0604 21:34:23.502105    6648 pod_ready.go:102] pod "coredns-7db6d8ff4d-h99w6" in "kube-system" namespace has status "Ready":"False"
	I0604 21:34:23.538035    6648 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0604 21:34:23.538035    6648 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-369400 ).state
	I0604 21:34:24.156201    6648 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 21:34:24.157013    6648 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:34:24.157013    6648 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-369400 ).networkadapters[0]).ipaddresses[0]
	I0604 21:34:24.535339    6648 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 21:34:24.535339    6648 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:34:24.535339    6648 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-369400 ).networkadapters[0]).ipaddresses[0]
	I0604 21:34:24.993033    6648 pod_ready.go:92] pod "coredns-7db6d8ff4d-h99w6" in "kube-system" namespace has status "Ready":"True"
	I0604 21:34:24.993498    6648 pod_ready.go:81] duration metric: took 13.1522668s for pod "coredns-7db6d8ff4d-h99w6" in "kube-system" namespace to be "Ready" ...
	I0604 21:34:24.993498    6648 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-wdkqd" in "kube-system" namespace to be "Ready" ...
	I0604 21:34:26.030463    6648 pod_ready.go:92] pod "coredns-7db6d8ff4d-wdkqd" in "kube-system" namespace has status "Ready":"True"
	I0604 21:34:26.030463    6648 pod_ready.go:81] duration metric: took 1.0369563s for pod "coredns-7db6d8ff4d-wdkqd" in "kube-system" namespace to be "Ready" ...
	I0604 21:34:26.030463    6648 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-369400" in "kube-system" namespace to be "Ready" ...
	I0604 21:34:26.313463    6648 pod_ready.go:92] pod "etcd-addons-369400" in "kube-system" namespace has status "Ready":"True"
	I0604 21:34:26.313463    6648 pod_ready.go:81] duration metric: took 282.9984ms for pod "etcd-addons-369400" in "kube-system" namespace to be "Ready" ...
	I0604 21:34:26.313463    6648 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-369400" in "kube-system" namespace to be "Ready" ...
	I0604 21:34:26.390455    6648 pod_ready.go:92] pod "kube-apiserver-addons-369400" in "kube-system" namespace has status "Ready":"True"
	I0604 21:34:26.390455    6648 pod_ready.go:81] duration metric: took 76.9914ms for pod "kube-apiserver-addons-369400" in "kube-system" namespace to be "Ready" ...
	I0604 21:34:26.390455    6648 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-369400" in "kube-system" namespace to be "Ready" ...
	I0604 21:34:26.485885    6648 pod_ready.go:92] pod "kube-controller-manager-addons-369400" in "kube-system" namespace has status "Ready":"True"
	I0604 21:34:26.485885    6648 pod_ready.go:81] duration metric: took 95.429ms for pod "kube-controller-manager-addons-369400" in "kube-system" namespace to be "Ready" ...
	I0604 21:34:26.485885    6648 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-82dw9" in "kube-system" namespace to be "Ready" ...
	I0604 21:34:26.616989    6648 pod_ready.go:92] pod "kube-proxy-82dw9" in "kube-system" namespace has status "Ready":"True"
	I0604 21:34:26.616989    6648 pod_ready.go:81] duration metric: took 131.1031ms for pod "kube-proxy-82dw9" in "kube-system" namespace to be "Ready" ...
	I0604 21:34:26.616989    6648 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-369400" in "kube-system" namespace to be "Ready" ...
	I0604 21:34:26.717800    6648 pod_ready.go:92] pod "kube-scheduler-addons-369400" in "kube-system" namespace has status "Ready":"True"
	I0604 21:34:26.717800    6648 pod_ready.go:81] duration metric: took 100.8096ms for pod "kube-scheduler-addons-369400" in "kube-system" namespace to be "Ready" ...
	I0604 21:34:26.717800    6648 pod_ready.go:38] duration metric: took 15.431556s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0604 21:34:26.717800    6648 api_server.go:52] waiting for apiserver process to appear ...
	I0604 21:34:26.777821    6648 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0604 21:34:26.975804    6648 api_server.go:72] duration metric: took 19.378341s to wait for apiserver process to appear ...
	I0604 21:34:26.975804    6648 api_server.go:88] waiting for apiserver healthz status ...
	I0604 21:34:26.975804    6648 api_server.go:253] Checking apiserver healthz at https://172.20.139.74:8443/healthz ...
	I0604 21:34:27.117701    6648 api_server.go:279] https://172.20.139.74:8443/healthz returned 200:
	ok
	I0604 21:34:27.157712    6648 api_server.go:141] control plane version: v1.30.1
	I0604 21:34:27.157712    6648 api_server.go:131] duration metric: took 181.9068ms to wait for apiserver health ...
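The healthz probe that api_server.go reports above can be reproduced by hand against the same endpoint; a minimal sketch using the node IP and port from this run:

    # Manual apiserver health check (expects HTTP 200 with body "ok")
    curl -k https://172.20.139.74:8443/healthz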
	I0604 21:34:27.158717    6648 system_pods.go:43] waiting for kube-system pods to appear ...
	I0604 21:34:27.220235    6648 system_pods.go:59] 7 kube-system pods found
	I0604 21:34:27.220235    6648 system_pods.go:61] "coredns-7db6d8ff4d-h99w6" [f795ef80-3675-46e0-9aa5-a37f1a9a63cb] Running
	I0604 21:34:27.220235    6648 system_pods.go:61] "coredns-7db6d8ff4d-wdkqd" [e6195072-a28e-4db0-8a2a-f7bcc2d8dea7] Running
	I0604 21:34:27.220235    6648 system_pods.go:61] "etcd-addons-369400" [c61bc94b-28d4-40a4-a265-75545edfdb24] Running
	I0604 21:34:27.220235    6648 system_pods.go:61] "kube-apiserver-addons-369400" [fbf3783a-7521-49ab-92df-fa24485af7a4] Running
	I0604 21:34:27.220235    6648 system_pods.go:61] "kube-controller-manager-addons-369400" [72078d5a-4f58-44d7-bd97-31668368fe3e] Running
	I0604 21:34:27.220235    6648 system_pods.go:61] "kube-proxy-82dw9" [601749aa-038e-4879-aae5-a3acce6de871] Running
	I0604 21:34:27.220235    6648 system_pods.go:61] "kube-scheduler-addons-369400" [d61a0055-d483-4269-b775-e4e3504209f6] Running
	I0604 21:34:27.220235    6648 system_pods.go:74] duration metric: took 61.5178ms to wait for pod list to return data ...
	I0604 21:34:27.220235    6648 default_sa.go:34] waiting for default service account to be created ...
	I0604 21:34:27.419018    6648 default_sa.go:45] found service account: "default"
	I0604 21:34:27.419018    6648 default_sa.go:55] duration metric: took 198.7817ms for default service account to be created ...
	I0604 21:34:27.419018    6648 system_pods.go:116] waiting for k8s-apps to be running ...
	I0604 21:34:27.478892    6648 system_pods.go:86] 7 kube-system pods found
	I0604 21:34:27.478892    6648 system_pods.go:89] "coredns-7db6d8ff4d-h99w6" [f795ef80-3675-46e0-9aa5-a37f1a9a63cb] Running
	I0604 21:34:27.478892    6648 system_pods.go:89] "coredns-7db6d8ff4d-wdkqd" [e6195072-a28e-4db0-8a2a-f7bcc2d8dea7] Running
	I0604 21:34:27.478892    6648 system_pods.go:89] "etcd-addons-369400" [c61bc94b-28d4-40a4-a265-75545edfdb24] Running
	I0604 21:34:27.478892    6648 system_pods.go:89] "kube-apiserver-addons-369400" [fbf3783a-7521-49ab-92df-fa24485af7a4] Running
	I0604 21:34:27.478892    6648 system_pods.go:89] "kube-controller-manager-addons-369400" [72078d5a-4f58-44d7-bd97-31668368fe3e] Running
	I0604 21:34:27.478892    6648 system_pods.go:89] "kube-proxy-82dw9" [601749aa-038e-4879-aae5-a3acce6de871] Running
	I0604 21:34:27.478892    6648 system_pods.go:89] "kube-scheduler-addons-369400" [d61a0055-d483-4269-b775-e4e3504209f6] Running
	I0604 21:34:27.478892    6648 system_pods.go:126] duration metric: took 59.8731ms to wait for k8s-apps to be running ...
	I0604 21:34:27.478892    6648 system_svc.go:44] waiting for kubelet service to be running ....
	I0604 21:34:27.500893    6648 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0604 21:34:27.598091    6648 system_svc.go:56] duration metric: took 119.198ms WaitForService to wait for kubelet
	I0604 21:34:27.598091    6648 kubeadm.go:576] duration metric: took 20.0006229s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0604 21:34:27.598091    6648 node_conditions.go:102] verifying NodePressure condition ...
	I0604 21:34:27.608799    6648 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0604 21:34:27.608799    6648 node_conditions.go:123] node cpu capacity is 2
	I0604 21:34:27.608799    6648 node_conditions.go:105] duration metric: took 10.7082ms to run NodePressure ...
	I0604 21:34:27.608799    6648 start.go:240] waiting for startup goroutines ...
	I0604 21:34:28.836407    6648 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 21:34:28.836407    6648 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:34:28.837399    6648 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-369400 ).networkadapters[0]).ipaddresses[0]
	I0604 21:34:28.843391    6648 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 21:34:28.843391    6648 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:34:28.843391    6648 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-369400 ).networkadapters[0]).ipaddresses[0]
	I0604 21:34:30.005790    6648 main.go:141] libmachine: [stdout =====>] : 172.20.139.74
	
	I0604 21:34:30.005790    6648 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:34:30.005790    6648 sshutil.go:53] new ssh client: &{IP:172.20.139.74 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-369400\id_rsa Username:docker}
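Each "new ssh client" line opens a direct SSH session to the VM as user docker using the per-profile key. The same session can be opened manually with either of the following (illustrative):

    # Direct SSH using the key path and IP reported in this run
    ssh -i "C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-369400\id_rsa" docker@172.20.139.74
    # Or simply
    minikube -p addons-369400 ssh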
	I0604 21:34:30.113801    6648 main.go:141] libmachine: [stdout =====>] : 172.20.139.74
	
	I0604 21:34:30.113801    6648 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:34:30.115495    6648 sshutil.go:53] new ssh client: &{IP:172.20.139.74 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-369400\id_rsa Username:docker}
	I0604 21:34:30.239578    6648 main.go:141] libmachine: [stdout =====>] : 172.20.139.74
	
	I0604 21:34:30.240832    6648 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:34:30.241836    6648 sshutil.go:53] new ssh client: &{IP:172.20.139.74 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-369400\id_rsa Username:docker}
	I0604 21:34:30.493656    6648 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0604 21:34:30.493974    6648 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0604 21:34:30.505653    6648 main.go:141] libmachine: [stdout =====>] : 172.20.139.74
	
	I0604 21:34:30.505749    6648 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:34:30.506478    6648 sshutil.go:53] new ssh client: &{IP:172.20.139.74 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-369400\id_rsa Username:docker}
	I0604 21:34:30.602996    6648 main.go:141] libmachine: [stdout =====>] : 172.20.139.74
	
	I0604 21:34:30.602996    6648 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:34:30.603996    6648 sshutil.go:53] new ssh client: &{IP:172.20.139.74 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-369400\id_rsa Username:docker}
	I0604 21:34:30.719411    6648 main.go:141] libmachine: [stdout =====>] : 172.20.139.74
	
	I0604 21:34:30.719726    6648 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:34:30.720832    6648 sshutil.go:53] new ssh client: &{IP:172.20.139.74 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-369400\id_rsa Username:docker}
	I0604 21:34:30.767803    6648 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0604 21:34:30.767803    6648 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0604 21:34:30.804571    6648 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0604 21:34:30.804571    6648 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0604 21:34:30.824846    6648 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0604 21:34:30.824846    6648 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0604 21:34:30.885163    6648 main.go:141] libmachine: [stdout =====>] : 172.20.139.74
	
	I0604 21:34:30.885248    6648 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:34:30.886309    6648 sshutil.go:53] new ssh client: &{IP:172.20.139.74 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-369400\id_rsa Username:docker}
	I0604 21:34:31.004000    6648 main.go:141] libmachine: [stdout =====>] : 172.20.139.74
	
	I0604 21:34:31.004377    6648 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:34:31.013146    6648 sshutil.go:53] new ssh client: &{IP:172.20.139.74 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-369400\id_rsa Username:docker}
	I0604 21:34:31.053940    6648 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0604 21:34:31.053940    6648 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0604 21:34:31.090567    6648 main.go:141] libmachine: [stdout =====>] : 172.20.139.74
	
	I0604 21:34:31.090567    6648 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:34:31.091568    6648 sshutil.go:53] new ssh client: &{IP:172.20.139.74 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-369400\id_rsa Username:docker}
	I0604 21:34:31.142303    6648 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0604 21:34:31.142303    6648 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0604 21:34:31.165122    6648 main.go:141] libmachine: [stdout =====>] : 172.20.139.74
	
	I0604 21:34:31.165122    6648 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:34:31.166102    6648 sshutil.go:53] new ssh client: &{IP:172.20.139.74 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-369400\id_rsa Username:docker}
	I0604 21:34:31.194108    6648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0604 21:34:31.270138    6648 main.go:141] libmachine: [stdout =====>] : 172.20.139.74
	
	I0604 21:34:31.270138    6648 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:34:31.270724    6648 sshutil.go:53] new ssh client: &{IP:172.20.139.74 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-369400\id_rsa Username:docker}
	I0604 21:34:31.412301    6648 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0604 21:34:31.412301    6648 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0604 21:34:31.483127    6648 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0604 21:34:31.483127    6648 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0604 21:34:31.488805    6648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0604 21:34:31.507481    6648 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 21:34:31.507481    6648 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:34:31.507481    6648 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-369400 ).networkadapters[0]).ipaddresses[0]
	I0604 21:34:31.541633    6648 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0604 21:34:31.541633    6648 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0604 21:34:31.550945    6648 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0604 21:34:31.550945    6648 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0604 21:34:31.608487    6648 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0604 21:34:31.608487    6648 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0604 21:34:31.618113    6648 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0604 21:34:31.618181    6648 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0604 21:34:31.755559    6648 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0604 21:34:31.755648    6648 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0604 21:34:31.773654    6648 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0604 21:34:31.773654    6648 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0604 21:34:31.897605    6648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0604 21:34:31.911175    6648 main.go:141] libmachine: [stdout =====>] : 172.20.139.74
	
	I0604 21:34:31.911175    6648 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:34:31.912509    6648 sshutil.go:53] new ssh client: &{IP:172.20.139.74 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-369400\id_rsa Username:docker}
	I0604 21:34:31.920319    6648 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0604 21:34:31.920319    6648 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0604 21:34:31.974082    6648 main.go:141] libmachine: [stdout =====>] : 172.20.139.74
	
	I0604 21:34:31.974169    6648 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:34:31.975186    6648 sshutil.go:53] new ssh client: &{IP:172.20.139.74 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-369400\id_rsa Username:docker}
	I0604 21:34:31.981659    6648 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0604 21:34:31.981771    6648 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0604 21:34:31.995046    6648 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0604 21:34:31.995133    6648 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0604 21:34:32.018324    6648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0604 21:34:32.025332    6648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0604 21:34:32.043326    6648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0604 21:34:32.077321    6648 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0604 21:34:32.077321    6648 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0604 21:34:32.090344    6648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0604 21:34:32.192880    6648 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0604 21:34:32.192975    6648 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0604 21:34:32.231708    6648 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0604 21:34:32.231708    6648 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0604 21:34:32.263806    6648 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0604 21:34:32.263919    6648 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0604 21:34:32.434028    6648 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0604 21:34:32.434028    6648 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0604 21:34:32.479765    6648 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0604 21:34:32.480027    6648 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0604 21:34:32.485611    6648 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0604 21:34:32.485611    6648 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0604 21:34:32.602038    6648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0604 21:34:32.649038    6648 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0604 21:34:32.649038    6648 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0604 21:34:32.748727    6648 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0604 21:34:32.748727    6648 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0604 21:34:32.756720    6648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0604 21:34:32.771550    6648 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0604 21:34:32.771648    6648 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0604 21:34:32.783922    6648 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0604 21:34:32.783922    6648 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0604 21:34:32.927641    6648 main.go:141] libmachine: [stdout =====>] : 172.20.139.74
	
	I0604 21:34:32.927641    6648 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:34:32.928641    6648 sshutil.go:53] new ssh client: &{IP:172.20.139.74 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-369400\id_rsa Username:docker}
	I0604 21:34:32.976646    6648 main.go:141] libmachine: [stdout =====>] : 172.20.139.74
	
	I0604 21:34:32.976646    6648 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:34:32.977645    6648 sshutil.go:53] new ssh client: &{IP:172.20.139.74 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-369400\id_rsa Username:docker}
	I0604 21:34:32.981664    6648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0604 21:34:33.083962    6648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0604 21:34:33.090960    6648 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0604 21:34:33.090960    6648 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0604 21:34:33.113438    6648 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0604 21:34:33.113438    6648 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0604 21:34:33.472902    6648 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0604 21:34:33.473010    6648 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0604 21:34:33.553556    6648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0604 21:34:33.705807    6648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0604 21:34:33.738452    6648 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0604 21:34:33.738516    6648 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0604 21:34:33.888665    6648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0604 21:34:34.180813    6648 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0604 21:34:34.180897    6648 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0604 21:34:34.481164    6648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0604 21:34:34.580790    6648 main.go:141] libmachine: [stdout =====>] : 172.20.139.74
	
	I0604 21:34:34.581825    6648 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:34:34.582067    6648 sshutil.go:53] new ssh client: &{IP:172.20.139.74 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-369400\id_rsa Username:docker}
	I0604 21:34:36.100717    6648 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0604 21:34:36.291480    6648 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.8026379s)
	I0604 21:34:36.291480    6648 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (5.0973325s)
	I0604 21:34:37.195874    6648 addons.go:234] Setting addon gcp-auth=true in "addons-369400"
	I0604 21:34:37.195874    6648 host.go:66] Checking if "addons-369400" exists ...
	I0604 21:34:37.197297    6648 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-369400 ).state
	I0604 21:34:38.324979    6648 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (6.3066062s)
	I0604 21:34:38.324979    6648 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.4273248s)
	I0604 21:34:38.325974    6648 addons.go:475] Verifying addon metrics-server=true in "addons-369400"
	I0604 21:34:38.324979    6648 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.2995982s)
	I0604 21:34:38.325974    6648 addons.go:475] Verifying addon registry=true in "addons-369400"
	I0604 21:34:38.329020    6648 out.go:177] * Verifying registry addon...
	I0604 21:34:38.334975    6648 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0604 21:34:38.411680    6648 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0604 21:34:38.411715    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:34:38.850562    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:34:39.354376    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:34:39.743591    6648 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 21:34:39.743591    6648 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:34:39.759220    6648 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0604 21:34:39.759220    6648 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-369400 ).state
	I0604 21:34:39.855462    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:34:40.351877    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:34:40.843689    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:34:41.349625    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:34:41.955122    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:34:42.326372    6648 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 21:34:42.327379    6648 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:34:42.327451    6648 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-369400 ).networkadapters[0]).ipaddresses[0]
	I0604 21:34:42.371013    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:34:42.858042    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:34:43.377740    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:34:43.888667    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:34:44.395528    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:34:44.845120    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:34:45.354386    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:34:45.395139    6648 main.go:141] libmachine: [stdout =====>] : 172.20.139.74
	
	I0604 21:34:45.395139    6648 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:34:45.396129    6648 sshutil.go:53] new ssh client: &{IP:172.20.139.74 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-369400\id_rsa Username:docker}
	I0604 21:34:45.871788    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:34:46.344830    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:34:46.886395    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:34:47.514512    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:34:47.962019    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:34:48.416604    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:34:48.853921    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:34:49.477319    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:34:49.867536    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:34:50.015241    6648 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (17.9717757s)
	I0604 21:34:50.015515    6648 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (17.925033s)
	I0604 21:34:50.015551    6648 addons.go:475] Verifying addon ingress=true in "addons-369400"
	I0604 21:34:50.015593    6648 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (17.413421s)
	I0604 21:34:50.024238    6648 out.go:177] * Verifying ingress addon...
	I0604 21:34:50.015593    6648 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (17.2587403s)
	I0604 21:34:50.015851    6648 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (17.0337982s)
	I0604 21:34:50.015970    6648 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (16.9317582s)
	I0604 21:34:50.016025    6648 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (16.4623418s)
	I0604 21:34:50.016025    6648 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (16.3100924s)
	I0604 21:34:50.016025    6648 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (16.1272358s)
	W0604 21:34:50.031401    6648 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0604 21:34:50.036257    6648 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-369400 service yakd-dashboard -n yakd-dashboard
	
	I0604 21:34:50.031401    6648 retry.go:31] will retry after 329.556325ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
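The failure recorded above is an ordering race rather than a bad manifest: the VolumeSnapshotClass in csi-hostpath-snapshotclass.yaml is submitted in the same kubectl invocation that creates the snapshot.storage.k8s.io CRDs, so the apiserver has not yet registered the VolumeSnapshotClass kind when the class arrives. minikube recovers on its own by retrying and, a few lines below, re-applying with --force. As a minimal hand-run sketch of the same idea (assuming the addon manifests are still present on the node at the paths shown in the log), the CRDs can be applied first, waited on until established, and only then the class applied:

	# apply the snapshot CRDs on their own first
	kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
	              -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
	              -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	# wait until the new kind is served by the apiserver
	kubectl wait --for=condition=Established crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
	# now the VolumeSnapshotClass can be mapped and applied
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml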
	I0604 21:34:50.033327    6648 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0604 21:34:50.047684    6648 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0604 21:34:50.047732    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0604 21:34:50.103716    6648 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
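The warning above is an optimistic-concurrency conflict: while minikube was switching the default storage class to "standard", its update to the local-path StorageClass raced with another write, so the apiserver rejected it with "the object has been modified". Re-doing the change against the current version of the objects is usually enough; a hedged sketch of that by hand, using the standard is-default-class annotation (the class names are taken from the warning itself), would be:

	# clear the default flag on local-path, then set it on standard;
	# annotate patches the live object, so it is applied against the
	# server's current version rather than a stale one
	kubectl annotate --overwrite storageclass local-path \
	  storageclass.kubernetes.io/is-default-class=false
	kubectl annotate --overwrite storageclass standard \
	  storageclass.kubernetes.io/is-default-class=true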
	I0604 21:34:50.359910    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:34:50.385270    6648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0604 21:34:50.589644    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:34:50.798288    6648 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (16.3169974s)
	I0604 21:34:50.798288    6648 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (11.0389823s)
	I0604 21:34:50.798288    6648 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-369400"
	I0604 21:34:50.803049    6648 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0604 21:34:50.806380    6648 out.go:177] * Verifying csi-hostpath-driver addon...
	I0604 21:34:50.811970    6648 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0604 21:34:50.819613    6648 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0604 21:34:50.822451    6648 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0604 21:34:50.822451    6648 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0604 21:34:50.906133    6648 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0604 21:34:50.906236    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:34:50.942674    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:34:51.062644    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:34:51.140824    6648 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0604 21:34:51.140824    6648 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0604 21:34:51.275052    6648 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0604 21:34:51.275113    6648 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0604 21:34:51.333635    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:34:51.344189    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:34:51.381632    6648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0604 21:34:51.548631    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:34:51.837201    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:34:51.842034    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:34:52.060221    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:34:52.330329    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:34:52.344039    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:34:52.557381    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:34:52.839588    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:34:52.843511    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:34:53.061169    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:34:53.355820    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:34:53.369750    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:34:53.559648    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:34:53.827334    6648 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.442038s)
	I0604 21:34:53.844685    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:34:53.861485    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:34:54.023613    6648 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.6408967s)
	I0604 21:34:54.032600    6648 addons.go:475] Verifying addon gcp-auth=true in "addons-369400"
	I0604 21:34:54.037272    6648 out.go:177] * Verifying gcp-auth addon...
	I0604 21:34:54.043369    6648 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0604 21:34:54.051629    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:34:54.062018    6648 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0604 21:34:54.343067    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:34:54.348539    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:34:54.558234    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:34:54.846494    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:34:54.846494    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:34:55.047706    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:34:55.338560    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:34:55.347654    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:34:55.806312    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:34:55.837436    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:34:55.844456    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:34:56.067426    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:34:56.353053    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:34:56.373869    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:34:56.547303    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:34:56.833732    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:34:56.848039    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:34:57.065057    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:34:57.360599    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:34:57.376262    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:34:57.561006    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:34:57.834699    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:34:57.844542    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:34:58.054022    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:34:58.342061    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:34:58.352132    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:34:58.560065    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:34:58.844142    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:34:58.851612    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:34:59.057272    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:34:59.330075    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:34:59.342686    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:34:59.553033    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:34:59.841604    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:34:59.848702    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:35:00.055871    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:35:00.328698    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:35:00.342309    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:35:00.554200    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:35:00.829693    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:35:00.842560    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:35:01.059328    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:35:01.331905    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:35:01.345818    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:35:01.558081    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:35:01.835205    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:35:01.842060    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:35:02.060339    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:35:02.354507    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:35:02.368355    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:35:02.547687    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:35:02.838033    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:35:02.843968    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:35:03.093778    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:35:03.486478    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:35:03.487271    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:35:03.556487    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:35:03.830592    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:35:03.842073    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:35:04.059092    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:35:04.342353    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:35:04.350198    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:35:04.546303    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:35:04.834715    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:35:04.850214    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:35:05.057741    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:35:05.331350    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:35:05.342955    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:35:05.549837    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:35:05.838639    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:35:05.848928    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:35:06.059874    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:35:06.329972    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:35:06.343777    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:35:06.547457    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:35:06.839031    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:35:06.845031    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:35:07.066259    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:35:07.335434    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:35:07.346097    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:35:07.551747    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:35:07.842627    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:35:07.844617    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:35:08.191641    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:35:08.334660    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:35:08.348359    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:35:08.547416    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:35:08.833711    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:35:08.848121    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:35:09.053839    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:35:09.343051    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:35:09.349302    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:35:09.546231    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:35:09.836297    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:35:09.846093    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:35:10.062979    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:35:10.343728    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:35:10.348258    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:35:10.561512    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:35:10.832776    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:35:10.843443    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:35:11.061458    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:35:11.333003    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:35:11.352994    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:35:11.641525    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:35:11.843341    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:35:11.852493    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:35:12.059066    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:35:12.343676    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:35:12.348398    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:35:12.546698    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:35:12.837298    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:35:12.843677    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:35:13.055584    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:35:13.341762    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:35:13.346239    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:35:13.559595    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:35:13.829976    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:35:13.846988    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:35:14.051293    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:35:14.651631    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:35:14.652221    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:35:14.660086    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:35:14.952470    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:35:14.953518    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:35:15.344537    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:35:15.350455    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:35:15.355339    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:35:15.559738    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:35:15.831374    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:35:15.846548    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:35:16.053414    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:35:16.342811    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:35:16.348992    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:35:16.560799    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:35:16.837887    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:35:16.842677    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:35:17.051889    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:35:17.347487    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:35:17.350498    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:35:17.549472    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:35:17.836836    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:35:17.848053    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:35:18.057898    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:35:18.329897    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:35:18.343485    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:35:18.549638    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:35:18.836467    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:35:18.841553    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:35:19.060990    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:35:19.336025    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:35:19.344383    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:35:19.552553    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:35:19.838533    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:35:19.843116    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:35:20.066160    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:35:20.346703    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:35:20.352710    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:35:20.561710    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:35:20.834395    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:35:20.850573    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:35:21.053523    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:35:21.339894    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:35:21.344071    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:35:21.557304    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:35:21.830742    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:35:21.847057    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:35:22.050030    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:35:22.341243    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:35:22.346965    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:35:22.556455    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:35:22.843104    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:35:22.848696    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:35:23.060704    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:35:23.330232    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:35:23.345727    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:35:23.548931    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:35:23.832992    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:35:23.845745    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:35:24.056318    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:35:24.342668    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:35:24.348944    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:35:24.561021    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:35:24.832613    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:35:24.845998    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:35:25.057510    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:35:25.339615    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:35:25.343009    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:35:25.561110    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:35:25.832958    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:35:25.876503    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:35:26.049882    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:35:26.342057    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:35:26.347296    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:35:26.551196    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:35:26.839071    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:35:26.844434    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:35:27.067532    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:35:27.368528    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:35:27.398687    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:35:27.595069    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:35:27.846185    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:35:27.861903    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:35:28.052863    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:35:28.345540    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:35:28.353980    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:35:28.574234    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:35:28.839909    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:35:28.844122    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:35:29.061213    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:35:29.333824    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:35:29.354809    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:35:29.552373    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:35:29.838633    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:35:29.845934    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:35:30.060511    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:35:30.344904    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:35:30.350189    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:35:30.548942    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:35:30.839951    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:35:30.845945    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:35:31.061046    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:35:31.334044    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:35:31.348457    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:35:31.553304    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:35:31.840895    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:35:31.846584    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:35:32.058865    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:35:32.331888    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:35:32.345302    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:35:32.553874    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:35:32.840625    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:35:32.848834    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:35:33.059862    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:35:33.333474    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:35:33.346365    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:35:33.556609    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:35:33.842540    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:35:33.846730    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:35:34.050272    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:35:34.336642    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:35:34.347104    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:35:34.554817    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:35:34.849182    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:35:34.862591    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:35:35.062299    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:35:35.333394    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:35:35.344149    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:35:35.552726    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:35:35.839332    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:35:35.844776    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:35:36.057923    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:35:36.331335    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:35:36.342150    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:35:36.550411    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:35:36.835858    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:35:36.842899    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:35:37.056945    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:35:37.342119    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:35:37.350149    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:35:37.562898    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:35:37.834718    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:35:37.849528    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:35:38.054826    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:35:38.355514    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:35:38.361395    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:35:38.561303    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:35:38.829978    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:35:38.846668    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:35:39.051723    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:35:39.336872    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:35:39.342669    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:35:39.558686    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:35:39.832041    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:35:39.847991    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:35:40.054403    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:35:40.339087    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:35:40.343185    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:35:40.558661    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:35:40.831582    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:35:40.849344    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:35:41.048864    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:35:41.335565    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:35:41.341577    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:35:41.553237    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:35:41.831112    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:35:41.844832    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:35:42.050744    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:35:42.340159    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:35:42.344899    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:35:42.556521    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:35:42.830303    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:35:42.844932    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:35:43.049144    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:35:43.337596    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:35:43.342977    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:35:43.558952    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:35:43.830017    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:35:43.844052    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:35:44.049391    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:35:44.340135    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:35:44.344574    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:35:44.575528    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:35:44.834398    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:35:44.844217    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:35:45.052931    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:35:45.336787    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:35:45.342768    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:35:45.559309    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:35:45.846428    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:35:45.862179    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:35:46.047613    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:35:46.334742    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:35:46.348189    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:35:46.553749    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:35:46.840063    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:35:46.845505    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:35:47.236460    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:35:47.337641    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:35:47.341627    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:35:47.552994    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:35:47.837004    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:35:47.843405    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:35:48.047708    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:35:48.335863    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:35:48.343546    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:35:48.547401    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:35:48.891975    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:35:48.896676    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:35:49.096259    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:35:49.345088    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:35:49.357759    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:35:49.568750    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:35:49.871803    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:35:49.882474    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:35:50.054995    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:35:50.350168    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:35:50.358283    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:35:50.565271    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:35:50.842092    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:35:50.847836    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:35:51.047202    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:35:51.329148    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:35:51.347177    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:35:51.554833    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:35:51.836223    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:35:51.843464    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:35:52.056480    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:35:52.344660    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:35:52.347649    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:35:52.550714    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:35:52.837951    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:35:52.844456    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:35:53.057921    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:35:53.331157    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:35:53.345293    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:35:53.553728    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:35:53.834038    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:35:53.847315    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:35:54.058374    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:35:54.340916    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:35:54.345689    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:35:54.562283    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:35:54.835569    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:35:54.841513    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:35:55.058248    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:35:55.342902    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:35:55.348542    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:35:55.547827    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:35:55.835403    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:35:55.847161    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:35:56.055031    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:35:56.345448    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:35:56.351007    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:35:56.549015    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:35:56.836206    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:35:56.847172    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:35:57.055767    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:35:57.344615    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:35:57.348738    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:35:57.547894    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:35:57.837231    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:35:57.843023    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:35:58.059399    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:35:58.330587    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:35:58.345937    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:35:58.549044    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:35:58.836259    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:35:58.843585    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:35:59.057352    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:35:59.343576    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:35:59.348413    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:35:59.561762    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:35:59.839471    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:35:59.846981    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:36:00.058875    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:36:00.329451    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:36:00.341815    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:36:00.550703    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:36:00.837616    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:36:00.850207    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:36:01.052946    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:36:01.663240    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:36:01.669678    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:36:01.670161    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:36:01.845736    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:36:01.988768    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:36:02.053513    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:36:02.339011    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:36:02.344728    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:36:02.554585    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:36:02.843721    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:36:02.848701    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:36:03.087950    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:36:03.350062    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:36:03.359240    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:36:03.547418    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:36:03.839891    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:36:03.845824    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:36:04.054672    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:36:04.337184    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:36:04.345531    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:36:04.553553    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:36:04.847083    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:36:04.847880    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:36:05.062516    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:36:05.328325    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:36:05.347329    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:36:05.549283    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:36:05.836980    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:36:05.844172    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:36:06.056787    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:36:06.331603    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:36:06.344649    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:36:06.564314    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:36:06.833665    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:36:06.846636    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0604 21:36:07.056762    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:36:07.337797    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:36:07.346439    6648 kapi.go:107] duration metric: took 1m29.0107669s to wait for kubernetes.io/minikube-addons=registry ...
	I0604 21:36:07.561096    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:36:07.831457    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:36:08.051915    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:36:08.337564    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:36:08.560963    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:36:08.831991    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:36:09.054149    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:36:09.342845    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:36:09.566471    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:36:09.835242    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:36:10.055344    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:36:10.339076    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:36:10.556670    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:36:10.830415    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:36:11.049381    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:36:11.335665    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:36:11.558057    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:36:11.842835    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:36:12.051222    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:36:12.336452    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:36:12.554053    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:36:12.841475    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:36:13.064157    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:36:13.449062    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:36:13.562545    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:36:13.832700    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:36:14.051755    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:36:14.343704    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:36:14.562047    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:36:14.834634    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:36:15.058189    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:36:15.345013    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:36:15.797126    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:36:15.831654    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:36:16.093319    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:36:16.418504    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:36:16.560987    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:36:16.831175    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:36:17.052450    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:36:17.344636    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:36:17.579536    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:36:17.835404    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:36:18.051579    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:36:18.359454    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:36:18.561496    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:36:18.832022    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:36:19.052492    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:36:19.336765    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:36:19.558773    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:36:19.841110    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:36:20.072504    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:36:20.331519    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:36:20.553066    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:36:20.838880    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:36:21.060514    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:36:21.332397    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:36:21.551528    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:36:21.836755    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:36:22.061275    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:36:22.344108    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:36:22.564162    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:36:22.832811    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:36:23.053886    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:36:23.343248    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:36:23.546401    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:36:23.840341    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:36:24.054908    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:36:24.339557    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:36:24.560647    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:36:24.831515    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:36:25.052583    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:36:25.340906    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:36:25.562702    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:36:26.255873    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:36:26.257681    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:36:26.338290    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:36:26.604863    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:36:26.841082    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:36:27.058957    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:36:27.411068    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:36:27.559893    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:36:27.846609    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:36:28.063149    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:36:28.333023    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:36:28.554497    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:36:28.841361    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:36:29.064924    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:36:29.330925    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:36:29.559939    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:36:29.836844    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:36:30.055981    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:36:30.340095    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:36:30.558453    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:36:30.853183    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:36:31.050263    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:36:31.337175    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:36:31.557880    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:36:31.829926    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:36:32.052006    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:36:32.333668    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:36:32.554015    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:36:32.839995    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:36:33.061269    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:36:33.333457    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:36:33.556528    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:36:34.061791    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:36:34.063656    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:36:34.349343    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:36:34.556360    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:36:34.838661    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:36:35.058073    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:36:35.345624    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:36:35.551333    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:36:35.845775    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:36:36.059185    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:36:36.343291    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:36:36.562309    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:36:36.831146    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:36:37.053012    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:36:37.342738    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:36:37.568788    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:36:37.836569    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:36:38.053645    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:36:38.338300    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:36:38.558206    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:36:38.832261    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:36:39.057454    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:36:39.341889    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:36:39.561431    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:36:39.832983    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:36:40.307008    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:36:40.339129    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:36:40.552688    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:36:40.838229    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:36:41.060251    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:36:41.345510    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:36:41.548259    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:36:41.838552    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:36:42.057254    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:36:42.342163    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:36:42.561407    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:36:42.831441    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:36:43.051258    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:36:43.342075    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:36:43.563947    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:36:43.840659    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:36:44.062083    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:36:44.332758    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:36:44.554527    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:36:44.839129    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:36:45.065341    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:36:45.333580    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:36:45.549817    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:36:45.836588    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:36:46.054895    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:36:46.342753    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:36:46.563807    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:36:47.166506    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:36:47.168537    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:36:47.446720    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:36:47.557867    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:36:47.838599    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:36:48.057946    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:36:48.342684    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:36:48.556744    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:36:48.841268    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:36:49.060544    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:36:49.342841    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:36:49.560922    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:36:49.832503    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:36:50.053669    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:36:50.340039    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:36:50.561262    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:36:50.834385    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:36:51.054912    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:36:51.346348    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:36:51.561233    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:36:51.915912    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:36:52.053711    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:36:52.345839    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:36:52.591460    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:36:52.839479    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:36:53.051972    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:36:53.353783    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:36:53.564048    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:36:53.836752    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:36:54.057092    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:36:54.342402    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:36:54.550221    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:36:54.836043    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:36:55.056415    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:36:55.345437    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:36:55.560762    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:36:55.841483    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:36:56.065264    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:36:56.404696    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:36:56.571315    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:36:56.835430    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:36:57.057951    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:36:57.345339    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:36:57.564397    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:36:57.835188    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:36:58.051302    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:36:58.338291    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:36:58.561404    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:36:58.835901    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:36:59.392378    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:36:59.394527    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:36:59.557197    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:36:59.836014    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:37:00.054765    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:37:00.335508    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:37:00.559806    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:37:00.842638    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:37:01.059186    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:37:01.331204    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:37:01.549118    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:37:01.838901    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:37:02.058730    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:37:02.331279    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:37:02.707894    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:37:02.850906    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:37:03.061606    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:37:03.333792    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:37:03.554091    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:37:03.871429    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:37:04.057652    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:37:04.334919    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:37:04.560969    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:37:04.830180    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:37:05.093805    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:37:05.335760    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:37:05.555749    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:37:05.844017    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:37:06.070932    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:37:06.340921    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:37:06.565899    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:37:06.843214    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:37:07.064277    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:37:07.333124    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:37:07.552317    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:37:07.843099    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:37:08.062607    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:37:08.333710    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:37:08.556060    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:37:08.848043    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:37:09.060515    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:37:09.332246    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:37:09.556136    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:37:09.849218    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:37:10.050731    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:37:10.335280    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:37:10.555838    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:37:10.850313    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:37:11.195659    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:37:11.512921    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:37:11.556628    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:37:11.951177    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:37:12.214234    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:37:12.333800    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:37:12.554261    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:37:12.839975    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:37:13.058456    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:37:13.334715    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:37:13.551212    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:37:13.838269    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:37:14.058249    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:37:14.345416    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:37:14.549391    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:37:14.839461    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:37:15.059907    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:37:15.330964    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:37:15.555065    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:37:15.832934    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:37:16.055102    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:37:16.336711    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:37:16.555942    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:37:16.830993    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:37:17.051489    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:37:17.352506    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:37:17.567138    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:37:17.830971    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:37:18.050422    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:37:18.336210    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:37:18.559695    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:37:18.843597    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:37:19.061141    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:37:19.588933    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:37:19.589630    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:37:19.840767    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:37:20.057736    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:37:20.342198    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:37:20.562442    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:37:20.834057    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:37:21.061784    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:37:21.330789    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:37:21.548619    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:37:21.850416    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:37:22.234790    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:37:22.338522    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:37:22.554639    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:37:22.838117    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:37:23.055403    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:37:23.336867    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:37:23.548862    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:37:23.839558    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:37:24.060770    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:37:24.331994    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:37:24.555302    6648 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0604 21:37:24.840770    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:37:25.063276    6648 kapi.go:107] duration metric: took 2m35.0287191s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0604 21:37:25.334873    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:37:25.855837    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:37:26.335846    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:37:26.838254    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:37:27.338026    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:37:27.838458    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:37:28.332549    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:37:28.841637    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:37:29.331954    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:37:29.838985    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:37:30.335301    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:37:30.836284    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:37:31.338825    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:37:31.831856    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:37:32.339032    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:37:32.846060    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:37:33.330983    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:37:33.841576    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:37:34.345204    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:37:34.836723    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:37:35.332017    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:37:35.840579    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:37:36.347310    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:37:36.839410    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:37:37.344032    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:37:37.834557    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:37:38.093989    6648 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0604 21:37:38.094100    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:37:38.352421    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:37:38.560340    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:37:38.834451    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:37:39.250592    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:37:39.590276    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:37:39.597501    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:37:39.838602    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:37:40.058620    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:37:40.345192    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:37:40.561547    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:37:40.879407    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:37:41.054499    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:37:41.339091    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:37:41.557088    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:37:41.830982    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:37:42.065356    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:37:42.338884    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:37:42.559532    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:37:43.112850    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:37:43.120813    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:37:43.335410    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:37:43.565921    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:37:43.835745    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0604 21:37:44.052800    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:37:44.336158    6648 kapi.go:107] duration metric: took 2m53.5228081s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0604 21:37:44.551505    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:37:45.058952    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:37:45.559588    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:37:46.060737    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:37:46.560807    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:37:47.065492    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:37:47.557520    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:37:48.059561    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:37:48.554593    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:37:49.062272    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:37:49.565864    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:37:50.064939    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:37:50.564060    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:37:51.065470    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:37:51.566734    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:37:52.056230    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:37:52.553814    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:37:53.057658    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:37:53.558716    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:37:54.067529    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:37:54.558317    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:37:55.062265    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:37:55.562763    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:37:56.062721    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:37:56.565777    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:37:57.053600    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:37:57.559997    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:37:58.077509    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:37:58.558010    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:37:59.067742    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:37:59.586086    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:38:00.057313    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:38:00.567281    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:38:01.058443    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:38:01.556321    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:38:02.093363    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:38:02.559086    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:38:03.064213    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:38:03.552420    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:38:04.056434    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:38:04.555628    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:38:05.059729    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:38:05.561641    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:38:06.060382    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:38:06.557345    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:38:07.060649    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:38:07.559424    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:38:08.065588    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:38:08.553712    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:38:09.055926    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:38:09.556984    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:38:10.058843    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:38:10.566262    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:38:11.060135    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:38:11.552825    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:38:12.066535    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:38:12.566220    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:38:13.064521    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:38:13.561191    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:38:14.054059    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:38:14.563055    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:38:15.067986    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:38:15.559293    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:38:16.070333    6648 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0604 21:38:16.567606    6648 kapi.go:107] duration metric: took 3m22.5226204s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0604 21:38:16.571709    6648 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-369400 cluster.
	I0604 21:38:16.574472    6648 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0604 21:38:16.577179    6648 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0604 21:38:16.580308    6648 out.go:177] * Enabled addons: cloud-spanner, helm-tiller, nvidia-device-plugin, metrics-server, volcano, ingress-dns, storage-provisioner, inspektor-gadget, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0604 21:38:16.583514    6648 addons.go:510] duration metric: took 4m8.9842262s for enable addons: enabled=[cloud-spanner helm-tiller nvidia-device-plugin metrics-server volcano ingress-dns storage-provisioner inspektor-gadget yakd storage-provisioner-rancher volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I0604 21:38:16.583646    6648 start.go:245] waiting for cluster config update ...
	I0604 21:38:16.583694    6648 start.go:254] writing updated cluster config ...
	I0604 21:38:16.596446    6648 ssh_runner.go:195] Run: rm -f paused
	I0604 21:38:16.890362    6648 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0604 21:38:16.894963    6648 out.go:177] * Done! kubectl is now configured to use "addons-369400" cluster and "default" namespace by default
	
	
	==> Docker <==
	Jun 04 21:39:10 addons-369400 dockerd[1329]: time="2024-06-04T21:39:10.495111589Z" level=info msg="ignoring event" container=65e1dd61b2f89c6c65953e1c7d75dbb175a5fa2dfe77978431da77d6d9f201ce module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 04 21:39:10 addons-369400 dockerd[1336]: time="2024-06-04T21:39:10.497247707Z" level=info msg="shim disconnected" id=65e1dd61b2f89c6c65953e1c7d75dbb175a5fa2dfe77978431da77d6d9f201ce namespace=moby
	Jun 04 21:39:10 addons-369400 dockerd[1336]: time="2024-06-04T21:39:10.497434108Z" level=warning msg="cleaning up after shim disconnected" id=65e1dd61b2f89c6c65953e1c7d75dbb175a5fa2dfe77978431da77d6d9f201ce namespace=moby
	Jun 04 21:39:10 addons-369400 dockerd[1336]: time="2024-06-04T21:39:10.497463209Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 04 21:39:10 addons-369400 dockerd[1336]: time="2024-06-04T21:39:10.760161570Z" level=info msg="shim disconnected" id=fde18bf603d070c251d16f67c949b73995c3b7bfb9fba498fe35602f17b9c042 namespace=moby
	Jun 04 21:39:10 addons-369400 dockerd[1329]: time="2024-06-04T21:39:10.760609374Z" level=info msg="ignoring event" container=fde18bf603d070c251d16f67c949b73995c3b7bfb9fba498fe35602f17b9c042 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 04 21:39:10 addons-369400 dockerd[1336]: time="2024-06-04T21:39:10.760817175Z" level=warning msg="cleaning up after shim disconnected" id=fde18bf603d070c251d16f67c949b73995c3b7bfb9fba498fe35602f17b9c042 namespace=moby
	Jun 04 21:39:10 addons-369400 dockerd[1336]: time="2024-06-04T21:39:10.761167078Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 04 21:39:13 addons-369400 dockerd[1336]: time="2024-06-04T21:39:13.178584640Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 04 21:39:13 addons-369400 dockerd[1336]: time="2024-06-04T21:39:13.178686941Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 04 21:39:13 addons-369400 dockerd[1336]: time="2024-06-04T21:39:13.178707841Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 04 21:39:13 addons-369400 dockerd[1336]: time="2024-06-04T21:39:13.178838142Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 04 21:39:13 addons-369400 cri-dockerd[1233]: time="2024-06-04T21:39:13Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5f908ceffa09ecdae9cdbf9beff56d9526201da4c97778ead8d6a97aa7a372fd/resolv.conf as [nameserver 10.96.0.10 search local-path-storage.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jun 04 21:39:13 addons-369400 dockerd[1336]: time="2024-06-04T21:39:13.882208279Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 04 21:39:13 addons-369400 dockerd[1336]: time="2024-06-04T21:39:13.884426098Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 04 21:39:13 addons-369400 dockerd[1336]: time="2024-06-04T21:39:13.884622599Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 04 21:39:13 addons-369400 dockerd[1336]: time="2024-06-04T21:39:13.884851401Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 04 21:39:14 addons-369400 dockerd[1329]: time="2024-06-04T21:39:14.025446030Z" level=info msg="ignoring event" container=d2938ce49cbacb00025019db7de07821014d346d624c0eb32bb4d730fd6f31aa module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 04 21:39:14 addons-369400 dockerd[1336]: time="2024-06-04T21:39:14.026535837Z" level=info msg="shim disconnected" id=d2938ce49cbacb00025019db7de07821014d346d624c0eb32bb4d730fd6f31aa namespace=moby
	Jun 04 21:39:14 addons-369400 dockerd[1336]: time="2024-06-04T21:39:14.026885439Z" level=warning msg="cleaning up after shim disconnected" id=d2938ce49cbacb00025019db7de07821014d346d624c0eb32bb4d730fd6f31aa namespace=moby
	Jun 04 21:39:14 addons-369400 dockerd[1336]: time="2024-06-04T21:39:14.027203942Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 04 21:39:15 addons-369400 dockerd[1329]: time="2024-06-04T21:39:15.770450112Z" level=info msg="ignoring event" container=5f908ceffa09ecdae9cdbf9beff56d9526201da4c97778ead8d6a97aa7a372fd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 04 21:39:15 addons-369400 dockerd[1336]: time="2024-06-04T21:39:15.774140137Z" level=info msg="shim disconnected" id=5f908ceffa09ecdae9cdbf9beff56d9526201da4c97778ead8d6a97aa7a372fd namespace=moby
	Jun 04 21:39:15 addons-369400 dockerd[1336]: time="2024-06-04T21:39:15.775727148Z" level=warning msg="cleaning up after shim disconnected" id=5f908ceffa09ecdae9cdbf9beff56d9526201da4c97778ead8d6a97aa7a372fd namespace=moby
	Jun 04 21:39:15 addons-369400 dockerd[1336]: time="2024-06-04T21:39:15.775868949Z" level=info msg="cleaning up dead shim" namespace=moby
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD
	d2938ce49cbac       a416a98b71e22                                                                                                                                3 seconds ago        Exited              helper-pod                               0                   5f908ceffa09e       helper-pod-delete-pvc-d2e31ec4-d787-4fa8-8e02-97096b762939
	5cabb40ae2682       busybox@sha256:9ae97d36d26566ff84e8893c64a6dc4fe8ca6d1144bf5b87b2b85a32def253c7                                                              19 seconds ago       Exited              busybox                                  0                   18395cf1b15d1       test-local-path
	fd82f3ca228de       busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79                                                              25 seconds ago       Exited              helper-pod                               0                   22d7b2d14d7f4       helper-pod-create-pvc-d2e31ec4-d787-4fa8-8e02-97096b762939
	d1e9cef667578       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:40402d51273ea7d281392557096333b5f62316a684f9bc9252214243840f757e                            26 seconds ago       Exited              gadget                                   4                   722b1995ab391       gadget-stm9h
	5b015fb5e1f81       ghcr.io/headlamp-k8s/headlamp@sha256:c48d3702275225be765218b1caffea7fc514ed31bc11533af71ffd1ee6f2fde1                                        27 seconds ago       Running             headlamp                                 0                   7ecba7cf44851       headlamp-7fc69f7444-hww9j
	a12a069b73411       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                                 About a minute ago   Running             gcp-auth                                 0                   a1ae2e750ccf3       gcp-auth-5db96cd9b4-824zf
	eef69b9bb005b       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          About a minute ago   Running             csi-snapshotter                          0                   d1923f4913921       csi-hostpathplugin-s7kdk
	0d532b55b14d2       registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8                          About a minute ago   Running             csi-provisioner                          0                   d1923f4913921       csi-hostpathplugin-s7kdk
	52e60dfd13219       registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0                            About a minute ago   Running             liveness-probe                           0                   d1923f4913921       csi-hostpathplugin-s7kdk
	769c93c252c91       registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5                           About a minute ago   Running             hostpath                                 0                   d1923f4913921       csi-hostpathplugin-s7kdk
	73ece9d2b46c6       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c                About a minute ago   Running             node-driver-registrar                    0                   d1923f4913921       csi-hostpathplugin-s7kdk
	a1d4caaf038ee       registry.k8s.io/ingress-nginx/controller@sha256:e24f39d3eed6bcc239a56f20098878845f62baa34b9f2be2fd2c38ce9fb0f29e                             About a minute ago   Running             controller                               0                   ac5aa57d24ab6       ingress-nginx-controller-768f948f8f-q4t9f
	b76ed9369cb17       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   2 minutes ago        Running             csi-external-health-monitor-controller   0                   d1923f4913921       csi-hostpathplugin-s7kdk
	38321b5c0b60a       fd19c461b125e                                                                                                                                2 minutes ago        Running             admission                                0                   5530fbcd72cd7       volcano-admission-7b497cf95b-h4chg
	2b9023d037811       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              2 minutes ago        Running             csi-resizer                              0                   ab47304f02926       csi-hostpath-resizer-0
	c27bb05f13a66       registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b                             2 minutes ago        Running             csi-attacher                             0                   a25363e70ec00       csi-hostpath-attacher-0
	61751e83cc1ca       volcanosh/vc-scheduler@sha256:64d6efcf1a48366201aafcaf1bd4cb6d66246ec1c395ddb0deefe11350bcebba                                               2 minutes ago        Running             volcano-scheduler                        0                   d6341b3d191f2       volcano-scheduler-765f888978-vdbdr
	89bd1e81ffa63       volcanosh/vc-controller-manager@sha256:1dd0973f67becc3336f009cce4eac8677d857aaf4ba766cfff371ad34dfc34cf                                      2 minutes ago        Running             volcano-controller                       0                   b83a928ddd2ab       volcano-controller-86c5446455-h8n57
	c39cfa925d289       volcanosh/vc-webhook-manager@sha256:082b6a3b7b8b69d98541a8ea56958ef427fdba54ea555870799f8c9ec2754c1b                                         2 minutes ago        Exited              main                                     0                   862af25592ced       volcano-admission-init-9t2h8
	704762f424e1a       684c5ea3b61b2                                                                                                                                2 minutes ago        Exited              patch                                    1                   b177246a66c5c       ingress-nginx-admission-patch-b8nnf
	aa2d9ca61b7e3       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:36d05b4077fb8e3d13663702fa337f124675ba8667cbd949c03a8e8ea6fa4366                   2 minutes ago        Exited              create                                   0                   32a03d47c92ff       ingress-nginx-admission-create-rqf6s
	14a5ddbfc2ae9       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      2 minutes ago        Running             volume-snapshot-controller               0                   3896ec0e25bb4       snapshot-controller-745499f584-jd9jm
	76042887db243       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      2 minutes ago        Running             volume-snapshot-controller               0                   be9907716c217       snapshot-controller-745499f584-cvsqh
	ecf69237deb99       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                                       2 minutes ago        Running             local-path-provisioner                   0                   4b2de015c027e       local-path-provisioner-8d985888d-47j92
	ab32e825ef3e3       marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                                                        2 minutes ago        Running             yakd                                     0                   c13addbc8c868       yakd-dashboard-5ddbf7d777-hbcmp
	5d984bb93137f       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f                             3 minutes ago        Running             minikube-ingress-dns                     0                   575e0c2995ebe       kube-ingress-dns-minikube
	ad60362988949       registry.k8s.io/metrics-server/metrics-server@sha256:db3800085a0957083930c3932b17580eec652cfb6156a05c0f79c7543e80d17a                        3 minutes ago        Running             metrics-server                           0                   4129f461a26b9       metrics-server-c59844bb4-jxg2b
	e7fd3d043cbb8       6e38f40d628db                                                                                                                                4 minutes ago        Running             storage-provisioner                      0                   5cc874d980ac0       storage-provisioner
	225840cc31ed0       cbb01a7bd410d                                                                                                                                4 minutes ago        Running             coredns                                  0                   47b4dcf348af3       coredns-7db6d8ff4d-h99w6
	2529166f627e6       747097150317f                                                                                                                                5 minutes ago        Running             kube-proxy                               0                   c46ccd9a092ad       kube-proxy-82dw9
	4a84a8d2338d6       25a1387cdab82                                                                                                                                5 minutes ago        Running             kube-controller-manager                  0                   980704fcf5019       kube-controller-manager-addons-369400
	61acb010ac35d       91be940803172                                                                                                                                5 minutes ago        Running             kube-apiserver                           0                   7e713dbb41096       kube-apiserver-addons-369400
	31c56342a40b5       3861cfcd7c04c                                                                                                                                5 minutes ago        Running             etcd                                     0                   7067dc632dfef       etcd-addons-369400
	ba838560c7477       a52dc94f0a912                                                                                                                                5 minutes ago        Running             kube-scheduler                           0                   5a2b858f73440       kube-scheduler-addons-369400
	
	
	==> controller_ingress [a1d4caaf038e] <==
	W0604 21:37:24.002914       7 client_config.go:618] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
	I0604 21:37:24.003792       7 main.go:205] "Creating API client" host="https://10.96.0.1:443"
	I0604 21:37:24.013020       7 main.go:248] "Running in Kubernetes cluster" major="1" minor="30" git="v1.30.1" state="clean" commit="6911225c3f747e1cd9d109c305436d08b668f086" platform="linux/amd64"
	I0604 21:37:24.331598       7 main.go:101] "SSL fake certificate created" file="/etc/ingress-controller/ssl/default-fake-certificate.pem"
	I0604 21:37:24.367973       7 ssl.go:535] "loading tls certificate" path="/usr/local/certificates/cert" key="/usr/local/certificates/key"
	I0604 21:37:24.382221       7 nginx.go:264] "Starting NGINX Ingress controller"
	I0604 21:37:24.419132       7 event.go:364] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"7c661a9a-29fe-4234-881a-e35a4f831fe3", APIVersion:"v1", ResourceVersion:"730", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/ingress-nginx-controller
	I0604 21:37:24.426226       7 event.go:364] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"tcp-services", UID:"2e9c3c60-616b-4c17-b2fa-287822900356", APIVersion:"v1", ResourceVersion:"731", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/tcp-services
	I0604 21:37:24.426374       7 event.go:364] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"e9c28ce6-cb53-4a47-80de-7ca0a871c115", APIVersion:"v1", ResourceVersion:"732", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
	I0604 21:37:25.600213       7 nginx.go:307] "Starting NGINX process"
	I0604 21:37:25.600408       7 leaderelection.go:250] attempting to acquire leader lease ingress-nginx/ingress-nginx-leader...
	I0604 21:37:25.601336       7 nginx.go:327] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
	I0604 21:37:25.601809       7 controller.go:190] "Configuration changes detected, backend reload required"
	I0604 21:37:25.634744       7 leaderelection.go:260] successfully acquired lease ingress-nginx/ingress-nginx-leader
	I0604 21:37:25.635064       7 status.go:84] "New leader elected" identity="ingress-nginx-controller-768f948f8f-q4t9f"
	I0604 21:37:25.710987       7 status.go:219] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-768f948f8f-q4t9f" node="addons-369400"
	I0604 21:37:25.751652       7 controller.go:210] "Backend successfully reloaded"
	I0604 21:37:25.752203       7 controller.go:221] "Initial sync, sleeping for 1 second"
	I0604 21:37:25.752971       7 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-768f948f8f-q4t9f", UID:"d9844ab4-f995-4790-8e90-056976ff16dd", APIVersion:"v1", ResourceVersion:"755", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	  Build:         4fb5aac1dd3669daa3a14d9de3e3cdb371b4c518
	  Repository:    https://github.com/kubernetes/ingress-nginx
	  nginx version: nginx/1.25.3
	
	-------------------------------------------------------------------------------
	
	
	
	==> coredns [225840cc31ed] <==
	[INFO] 10.244.0.8:33350 - 62285 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000552703s
	[INFO] 10.244.0.8:57270 - 43557 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000062401s
	[INFO] 10.244.0.8:57270 - 35110 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.0000406s
	[INFO] 10.244.0.8:59945 - 32655 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000245301s
	[INFO] 10.244.0.8:59945 - 16010 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000058001s
	[INFO] 10.244.0.8:47545 - 56808 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000355802s
	[INFO] 10.244.0.8:47545 - 29678 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000145501s
	[INFO] 10.244.0.8:49903 - 9970 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000571804s
	[INFO] 10.244.0.8:49903 - 52144 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.0000626s
	[INFO] 10.244.0.8:56923 - 15340 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000175101s
	[INFO] 10.244.0.8:56923 - 29409 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.0000359s
	[INFO] 10.244.0.8:50131 - 2057 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000358002s
	[INFO] 10.244.0.8:50131 - 21262 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000107601s
	[INFO] 10.244.0.8:42920 - 1660 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000043301s
	[INFO] 10.244.0.8:42920 - 34846 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.0000395s
	[INFO] 10.244.0.26:49111 - 54026 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000402503s
	[INFO] 10.244.0.26:54072 - 42692 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000216802s
	[INFO] 10.244.0.26:50695 - 48836 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000189201s
	[INFO] 10.244.0.26:54458 - 15312 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000399803s
	[INFO] 10.244.0.26:36322 - 38234 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.0001047s
	[INFO] 10.244.0.26:59752 - 40186 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.001802611s
	[INFO] 10.244.0.26:47551 - 21312 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd 240 0.002471214s
	[INFO] 10.244.0.26:49494 - 45499 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd 192 0.001376208s
	[INFO] 10.244.0.28:38289 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000718404s
	[INFO] 10.244.0.28:46460 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000292602s
	
	
	==> describe nodes <==
	Name:               addons-369400
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-369400
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=901ac483c3e1097c63cda7493d918b612a8127f5
	                    minikube.k8s.io/name=addons-369400
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_04T21_33_53_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-369400
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-369400"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 04 Jun 2024 21:33:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-369400
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 04 Jun 2024 21:39:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 04 Jun 2024 21:38:59 +0000   Tue, 04 Jun 2024 21:33:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 04 Jun 2024 21:38:59 +0000   Tue, 04 Jun 2024 21:33:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 04 Jun 2024 21:38:59 +0000   Tue, 04 Jun 2024 21:33:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 04 Jun 2024 21:38:59 +0000   Tue, 04 Jun 2024 21:33:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.20.139.74
	  Hostname:    addons-369400
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912872Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912872Ki
	  pods:               110
	System Info:
	  Machine ID:                 a8b636580d1b4e51aaed0b266905a25e
	  System UUID:                31be6e36-9d04-ca49-81aa-695ef07cf49b
	  Boot ID:                    f9b1ad38-8a7a-4f54-8edf-305bab2fb220
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.1.3
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (23 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  gadget                      gadget-stm9h                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m36s
	  gcp-auth                    gcp-auth-5db96cd9b4-824zf                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         100s
	  headlamp                    headlamp-7fc69f7444-hww9j                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         42s
	  ingress-nginx               ingress-nginx-controller-768f948f8f-q4t9f    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         4m32s
	  kube-system                 coredns-7db6d8ff4d-h99w6                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     5m7s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m27s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m27s
	  kube-system                 csi-hostpathplugin-s7kdk                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m27s
	  kube-system                 etcd-addons-369400                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         5m25s
	  kube-system                 kube-apiserver-addons-369400                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m25s
	  kube-system                 kube-controller-manager-addons-369400        200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m25s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m39s
	  kube-system                 kube-proxy-82dw9                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m10s
	  kube-system                 kube-scheduler-addons-369400                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m25s
	  kube-system                 metrics-server-c59844bb4-jxg2b               100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         4m40s
	  kube-system                 snapshot-controller-745499f584-cvsqh         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m33s
	  kube-system                 snapshot-controller-745499f584-jd9jm         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m33s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m36s
	  local-path-storage          local-path-provisioner-8d985888d-47j92       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m34s
	  volcano-system              volcano-admission-7b497cf95b-h4chg           0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m31s
	  volcano-system              volcano-controller-86c5446455-h8n57          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m29s
	  volcano-system              volcano-scheduler-765f888978-vdbdr           0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m28s
	  yakd-dashboard              yakd-dashboard-5ddbf7d777-hbcmp              0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     4m35s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             588Mi (15%)  426Mi (11%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m57s                  kube-proxy       
	  Normal  Starting                 5m34s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m33s (x8 over 5m34s)  kubelet          Node addons-369400 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m33s (x8 over 5m34s)  kubelet          Node addons-369400 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m33s (x7 over 5m34s)  kubelet          Node addons-369400 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m33s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 5m25s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  5m25s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m25s                  kubelet          Node addons-369400 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m25s                  kubelet          Node addons-369400 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m25s                  kubelet          Node addons-369400 status is now: NodeHasSufficientPID
	  Normal  NodeReady                5m22s                  kubelet          Node addons-369400 status is now: NodeReady
	  Normal  RegisteredNode           5m11s                  node-controller  Node addons-369400 event: Registered Node addons-369400 in Controller
	
	
	==> dmesg <==
	[  +5.001336] kauditd_printk_skb: 66 callbacks suppressed
	[  +5.193602] kauditd_printk_skb: 66 callbacks suppressed
	[  +5.337578] kauditd_printk_skb: 60 callbacks suppressed
	[Jun 4 21:35] kauditd_printk_skb: 2 callbacks suppressed
	[Jun 4 21:36] kauditd_printk_skb: 2 callbacks suppressed
	[ +11.078049] kauditd_printk_skb: 24 callbacks suppressed
	[  +7.019673] kauditd_printk_skb: 2 callbacks suppressed
	[ +13.071949] kauditd_printk_skb: 24 callbacks suppressed
	[  +8.381104] kauditd_printk_skb: 8 callbacks suppressed
	[Jun 4 21:37] kauditd_printk_skb: 2 callbacks suppressed
	[  +1.896897] hrtimer: interrupt took 1319606 ns
	[  +3.234100] kauditd_printk_skb: 34 callbacks suppressed
	[ +17.320497] kauditd_printk_skb: 29 callbacks suppressed
	[  +5.117299] kauditd_printk_skb: 33 callbacks suppressed
	[ +11.875961] kauditd_printk_skb: 2 callbacks suppressed
	[Jun 4 21:38] kauditd_printk_skb: 74 callbacks suppressed
	[  +5.075478] kauditd_printk_skb: 7 callbacks suppressed
	[ +11.950460] kauditd_printk_skb: 8 callbacks suppressed
	[  +6.729693] kauditd_printk_skb: 22 callbacks suppressed
	[  +5.406294] kauditd_printk_skb: 6 callbacks suppressed
	[  +7.348751] kauditd_printk_skb: 10 callbacks suppressed
	[  +5.029825] kauditd_printk_skb: 51 callbacks suppressed
	[  +5.430039] kauditd_printk_skb: 31 callbacks suppressed
	[Jun 4 21:39] kauditd_printk_skb: 3 callbacks suppressed
	[  +5.284977] kauditd_printk_skb: 20 callbacks suppressed
	
	
	==> etcd [31c56342a40b] <==
	{"level":"warn","ts":"2024-06-04T21:38:41.898055Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"191.557988ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/test-local-path\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-06-04T21:38:41.898121Z","caller":"traceutil/trace.go:171","msg":"trace[1633573947] range","detail":"{range_begin:/registry/pods/default/test-local-path; range_end:; response_count:0; response_revision:1667; }","duration":"191.658889ms","start":"2024-06-04T21:38:41.706452Z","end":"2024-06-04T21:38:41.898111Z","steps":["trace[1633573947] 'agreement among raft nodes before linearized reading'  (duration: 190.539982ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-04T21:38:41.898638Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"177.776603ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/172.20.139.74\" ","response":"range_response_count:1 size:133"}
	{"level":"info","ts":"2024-06-04T21:38:41.898835Z","caller":"traceutil/trace.go:171","msg":"trace[1616699925] range","detail":"{range_begin:/registry/masterleases/172.20.139.74; range_end:; response_count:1; response_revision:1667; }","duration":"178.010804ms","start":"2024-06-04T21:38:41.720814Z","end":"2024-06-04T21:38:41.898824Z","steps":["trace[1616699925] 'agreement among raft nodes before linearized reading'  (duration: 177.685802ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-04T21:38:41.897847Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-04T21:38:41.573988Z","time spent":"323.801308ms","remote":"127.0.0.1:34282","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":703,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/events/default/test-pvc.17d5eae982e14314\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/test-pvc.17d5eae982e14314\" value_size:635 lease:1849448811439678451 >> failure:<>"}
	{"level":"info","ts":"2024-06-04T21:38:42.037251Z","caller":"traceutil/trace.go:171","msg":"trace[670895428] transaction","detail":"{read_only:false; response_revision:1668; number_of_response:1; }","duration":"121.098751ms","start":"2024-06-04T21:38:41.916126Z","end":"2024-06-04T21:38:42.037225Z","steps":["trace[670895428] 'process raft request'  (duration: 92.691075ms)","trace[670895428] 'compare'  (duration: 28.045374ms)"],"step_count":2}
	{"level":"info","ts":"2024-06-04T21:38:48.75633Z","caller":"traceutil/trace.go:171","msg":"trace[1501286756] linearizableReadLoop","detail":"{readStateIndex:1786; appliedIndex:1785; }","duration":"130.914551ms","start":"2024-06-04T21:38:48.625395Z","end":"2024-06-04T21:38:48.75631Z","steps":["trace[1501286756] 'read index received'  (duration: 130.494848ms)","trace[1501286756] 'applied index is now lower than readState.Index'  (duration: 417.203µs)"],"step_count":2}
	{"level":"info","ts":"2024-06-04T21:38:48.756988Z","caller":"traceutil/trace.go:171","msg":"trace[266958613] transaction","detail":"{read_only:false; response_revision:1700; number_of_response:1; }","duration":"151.986604ms","start":"2024-06-04T21:38:48.604988Z","end":"2024-06-04T21:38:48.756975Z","steps":["trace[266958613] 'process raft request'  (duration: 151.122198ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-04T21:38:48.75752Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"132.228261ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2024-06-04T21:38:48.75892Z","caller":"traceutil/trace.go:171","msg":"trace[1189253173] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1700; }","duration":"133.634671ms","start":"2024-06-04T21:38:48.625272Z","end":"2024-06-04T21:38:48.758906Z","steps":["trace[1189253173] 'agreement among raft nodes before linearized reading'  (duration: 131.948059ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-04T21:38:48.759143Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"130.169646ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/local-path-storage/helper-pod-create-pvc-d2e31ec4-d787-4fa8-8e02-97096b762939\" ","response":"range_response_count:1 size:4204"}
	{"level":"info","ts":"2024-06-04T21:38:48.759173Z","caller":"traceutil/trace.go:171","msg":"trace[1421196489] range","detail":"{range_begin:/registry/pods/local-path-storage/helper-pod-create-pvc-d2e31ec4-d787-4fa8-8e02-97096b762939; range_end:; response_count:1; response_revision:1700; }","duration":"130.220746ms","start":"2024-06-04T21:38:48.628944Z","end":"2024-06-04T21:38:48.759165Z","steps":["trace[1421196489] 'agreement among raft nodes before linearized reading'  (duration: 130.141445ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-04T21:38:48.909002Z","caller":"traceutil/trace.go:171","msg":"trace[1774526394] transaction","detail":"{read_only:false; response_revision:1701; number_of_response:1; }","duration":"140.955824ms","start":"2024-06-04T21:38:48.767937Z","end":"2024-06-04T21:38:48.908893Z","steps":["trace[1774526394] 'process raft request'  (duration: 88.14214ms)","trace[1774526394] 'compare'  (duration: 52.659883ms)"],"step_count":2}
	{"level":"warn","ts":"2024-06-04T21:38:49.543666Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"253.872644ms","expected-duration":"100ms","prefix":"","request":"header:<ID:11072820848294454949 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" mod_revision:1679 > success:<request_put:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" value_size:452 >> failure:<request_range:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-06-04T21:38:49.543778Z","caller":"traceutil/trace.go:171","msg":"trace[1080115969] linearizableReadLoop","detail":"{readStateIndex:1788; appliedIndex:1787; }","duration":"491.054366ms","start":"2024-06-04T21:38:49.052711Z","end":"2024-06-04T21:38:49.543766Z","steps":["trace[1080115969] 'read index received'  (duration: 236.910621ms)","trace[1080115969] 'applied index is now lower than readState.Index'  (duration: 254.142845ms)"],"step_count":2}
	{"level":"info","ts":"2024-06-04T21:38:49.544001Z","caller":"traceutil/trace.go:171","msg":"trace[1623545533] transaction","detail":"{read_only:false; response_revision:1702; number_of_response:1; }","duration":"548.932887ms","start":"2024-06-04T21:38:48.995056Z","end":"2024-06-04T21:38:49.543989Z","steps":["trace[1623545533] 'process raft request'  (duration: 294.61924ms)","trace[1623545533] 'compare'  (duration: 253.793843ms)"],"step_count":2}
	{"level":"warn","ts":"2024-06-04T21:38:49.544123Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-04T21:38:48.995039Z","time spent":"548.985787ms","remote":"127.0.0.1:34434","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":539,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" mod_revision:1679 > success:<request_put:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" value_size:452 >> failure:<request_range:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" > >"}
	{"level":"warn","ts":"2024-06-04T21:38:49.544416Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"491.700371ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:2 size:5807"}
	{"level":"info","ts":"2024-06-04T21:38:49.544444Z","caller":"traceutil/trace.go:171","msg":"trace[39930800] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:2; response_revision:1702; }","duration":"491.757171ms","start":"2024-06-04T21:38:49.052678Z","end":"2024-06-04T21:38:49.544436Z","steps":["trace[39930800] 'agreement among raft nodes before linearized reading'  (duration: 491.63667ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-04T21:38:49.544466Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-04T21:38:49.052661Z","time spent":"491.799472ms","remote":"127.0.0.1:34380","response type":"/etcdserverpb.KV/Range","request count":0,"request size":50,"response count":2,"response size":5831,"request content":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" "}
	{"level":"warn","ts":"2024-06-04T21:38:49.545838Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"367.306767ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/headlamp/\" range_end:\"/registry/pods/headlamp0\" ","response":"range_response_count:1 size:3964"}
	{"level":"info","ts":"2024-06-04T21:38:49.545878Z","caller":"traceutil/trace.go:171","msg":"trace[991199430] range","detail":"{range_begin:/registry/pods/headlamp/; range_end:/registry/pods/headlamp0; response_count:1; response_revision:1702; }","duration":"367.378769ms","start":"2024-06-04T21:38:49.17849Z","end":"2024-06-04T21:38:49.545868Z","steps":["trace[991199430] 'agreement among raft nodes before linearized reading'  (duration: 367.267668ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-04T21:38:49.545903Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-04T21:38:49.178474Z","time spent":"367.422869ms","remote":"127.0.0.1:34380","response type":"/etcdserverpb.KV/Range","request count":0,"request size":52,"response count":1,"response size":3988,"request content":"key:\"/registry/pods/headlamp/\" range_end:\"/registry/pods/headlamp0\" "}
	{"level":"warn","ts":"2024-06-04T21:38:49.546175Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"285.248272ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/persistentvolumeclaims/default/test-pvc\" ","response":"range_response_count:1 size:1412"}
	{"level":"info","ts":"2024-06-04T21:38:49.546202Z","caller":"traceutil/trace.go:171","msg":"trace[1093047927] range","detail":"{range_begin:/registry/persistentvolumeclaims/default/test-pvc; range_end:; response_count:1; response_revision:1702; }","duration":"285.277172ms","start":"2024-06-04T21:38:49.260917Z","end":"2024-06-04T21:38:49.546195Z","steps":["trace[1093047927] 'agreement among raft nodes before linearized reading'  (duration: 285.191971ms)"],"step_count":1}
	
	
	==> gcp-auth [a12a069b7341] <==
	2024/06/04 21:38:15 GCP Auth Webhook started!
	2024/06/04 21:38:22 Ready to marshal response ...
	2024/06/04 21:38:22 Ready to write response ...
	2024/06/04 21:38:27 Ready to marshal response ...
	2024/06/04 21:38:27 Ready to write response ...
	2024/06/04 21:38:34 Ready to marshal response ...
	2024/06/04 21:38:34 Ready to write response ...
	2024/06/04 21:38:35 Ready to marshal response ...
	2024/06/04 21:38:35 Ready to write response ...
	2024/06/04 21:38:35 Ready to marshal response ...
	2024/06/04 21:38:35 Ready to write response ...
	2024/06/04 21:38:41 Ready to marshal response ...
	2024/06/04 21:38:41 Ready to write response ...
	2024/06/04 21:38:42 Ready to marshal response ...
	2024/06/04 21:38:42 Ready to write response ...
	2024/06/04 21:39:12 Ready to marshal response ...
	2024/06/04 21:39:12 Ready to write response ...
	
	
	==> kernel <==
	 21:39:17 up 7 min,  0 users,  load average: 2.18, 2.30, 1.18
	Linux addons-369400 5.10.207 #1 SMP Tue Jun 4 20:09:42 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [61acb010ac35] <==
	W0604 21:37:06.769026       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.109.163.161:443: connect: connection refused
	W0604 21:37:07.836483       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.109.163.161:443: connect: connection refused
	W0604 21:37:08.924098       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.109.163.161:443: connect: connection refused
	W0604 21:37:09.951304       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.109.163.161:443: connect: connection refused
	W0604 21:37:11.000613       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.109.163.161:443: connect: connection refused
	W0604 21:37:12.051346       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.109.163.161:443: connect: connection refused
	I0604 21:37:12.227085       1 trace.go:236] Trace[422265007]: "GuaranteedUpdate etcd3" audit-id:,key:/masterleases/172.20.139.74,type:*v1.Endpoints,resource:apiServerIPInfo (04-Jun-2024 21:37:11.708) (total time: 518ms):
	Trace[422265007]: ---"initial value restored" 247ms (21:37:11.956)
	Trace[422265007]: ---"Transaction prepared" 265ms (21:37:12.221)
	Trace[422265007]: [518.216616ms] [518.216616ms] END
	W0604 21:37:37.908383       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.109.137.252:443: connect: connection refused
	E0604 21:37:37.908542       1 dispatcher.go:214] failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.109.137.252:443: connect: connection refused
	W0604 21:37:56.962919       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.109.137.252:443: connect: connection refused
	E0604 21:37:56.962999       1 dispatcher.go:214] failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.109.137.252:443: connect: connection refused
	W0604 21:37:57.141039       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.109.137.252:443: connect: connection refused
	E0604 21:37:57.141145       1 dispatcher.go:214] failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.109.137.252:443: connect: connection refused
	E0604 21:38:28.014702       1 conn.go:339] Error on socket receive: read tcp 172.20.139.74:8443->172.20.128.1:62620: use of closed network connection
	I0604 21:38:34.981004       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.106.144.31"}
	I0604 21:38:49.598592       1 trace.go:236] Trace[1874610134]: "Update" accept:application/json, */*,audit-id:4d4c670d-439c-43b6-b807-78dbd393d4a0,client:10.244.0.21,api-group:coordination.k8s.io,api-version:v1,name:external-health-monitor-leader-hostpath-csi-k8s-io,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/external-health-monitor-leader-hostpath-csi-k8s-io,user-agent:csi-external-health-monitor-controller/v0.0.0 (linux/amd64) kubernetes/$Format,verb:PUT (04-Jun-2024 21:38:48.993) (total time: 604ms):
	Trace[1874610134]: ["GuaranteedUpdate etcd3" audit-id:4d4c670d-439c-43b6-b807-78dbd393d4a0,key:/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io,type:*coordination.Lease,resource:leases.coordination.k8s.io 604ms (21:38:48.993)
	Trace[1874610134]:  ---"Txn call completed" 603ms (21:38:49.597)]
	Trace[1874610134]: [604.776792ms] [604.776792ms] END
	I0604 21:38:49.603644       1 trace.go:236] Trace[108806192]: "List" accept:application/json, */*,audit-id:e14be8c1-5b1a-4baa-a4cb-3624a89fac16,client:172.20.128.1,api-group:,api-version:v1,name:,subresource:,namespace:default,protocol:HTTP/2.0,resource:pods,scope:namespace,url:/api/v1/namespaces/default/pods,user-agent:e2e-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format,verb:LIST (04-Jun-2024 21:38:49.051) (total time: 551ms):
	Trace[108806192]: ["List(recursive=true) etcd3" audit-id:e14be8c1-5b1a-4baa-a4cb-3624a89fac16,key:/pods/default,resourceVersion:,resourceVersionMatch:,limit:0,continue: 551ms (21:38:49.051)]
	Trace[108806192]: [551.892709ms] [551.892709ms] END
	
	
	==> kube-controller-manager [4a84a8d2338d] <==
	I0604 21:38:01.017448       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0604 21:38:01.042455       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0604 21:38:01.677323       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0604 21:38:01.833487       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0604 21:38:02.033247       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0604 21:38:02.051743       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0604 21:38:02.063685       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0604 21:38:02.073867       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0604 21:38:02.107225       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0604 21:38:02.119449       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0604 21:38:16.323044       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-5db96cd9b4" duration="25.241344ms"
	I0604 21:38:16.324099       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-5db96cd9b4" duration="37µs"
	I0604 21:38:32.034130       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0604 21:38:32.043930       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0604 21:38:32.166163       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0604 21:38:32.171945       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0604 21:38:35.191660       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7fc69f7444" duration="128.900751ms"
	I0604 21:38:35.224127       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7fc69f7444" duration="32.423389ms"
	I0604 21:38:35.224886       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7fc69f7444" duration="679.303µs"
	I0604 21:38:35.237481       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7fc69f7444" duration="144.101µs"
	I0604 21:38:47.702006       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/tiller-deploy-6677d64bcd" duration="5µs"
	I0604 21:38:50.823032       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7fc69f7444" duration="29.094003ms"
	I0604 21:38:50.823181       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7fc69f7444" duration="39.7µs"
	I0604 21:38:52.632860       1 replica_set.go:676] "Finished syncing" logger="replicationcontroller-controller" kind="ReplicationController" key="kube-system/registry" duration="11.4µs"
	I0604 21:39:10.367047       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/cloud-spanner-emulator-6fcd4f6f98" duration="5.8µs"
	
	
	==> kube-proxy [2529166f627e] <==
	I0604 21:34:18.304272       1 server_linux.go:69] "Using iptables proxy"
	I0604 21:34:18.584697       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.20.139.74"]
	I0604 21:34:19.381375       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0604 21:34:19.381526       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0604 21:34:19.381614       1 server_linux.go:165] "Using iptables Proxier"
	I0604 21:34:19.414903       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0604 21:34:19.415262       1 server.go:872] "Version info" version="v1.30.1"
	I0604 21:34:19.415290       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0604 21:34:19.435911       1 config.go:192] "Starting service config controller"
	I0604 21:34:19.435946       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0604 21:34:19.435984       1 config.go:101] "Starting endpoint slice config controller"
	I0604 21:34:19.435994       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0604 21:34:19.450951       1 config.go:319] "Starting node config controller"
	I0604 21:34:19.450997       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0604 21:34:19.577793       1 shared_informer.go:320] Caches are synced for node config
	I0604 21:34:19.577913       1 shared_informer.go:320] Caches are synced for service config
	I0604 21:34:19.594515       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [ba838560c747] <==
	E0604 21:33:50.319742       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0604 21:33:50.414918       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0604 21:33:50.415287       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0604 21:33:50.437426       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0604 21:33:50.437752       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0604 21:33:50.452483       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0604 21:33:50.452685       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0604 21:33:50.478397       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0604 21:33:50.478834       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0604 21:33:50.579489       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0604 21:33:50.579593       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0604 21:33:50.743499       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0604 21:33:50.743542       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0604 21:33:50.827480       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0604 21:33:50.828347       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0604 21:33:50.847365       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0604 21:33:50.847634       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0604 21:33:50.902951       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0604 21:33:50.902992       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0604 21:33:50.968372       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0604 21:33:50.968448       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0604 21:33:53.330746       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0604 21:38:42.243640       1 trace.go:236] Trace[2113657771]: "Scheduling" namespace:default,name:test-local-path (04-Jun-2024 21:38:42.039) (total time: 171ms):
	Trace[2113657771]: ---"Computing predicates done" 171ms (21:38:42.210)
	Trace[2113657771]: [171.391563ms] [171.391563ms] END
	
	
	==> kubelet <==
	Jun 04 21:39:12 addons-369400 kubelet[2131]: I0604 21:39:12.619756    2131 topology_manager.go:215] "Topology Admit Handler" podUID="0b6c1522-d7d1-447a-8819-81869d226103" podNamespace="local-path-storage" podName="helper-pod-delete-pvc-d2e31ec4-d787-4fa8-8e02-97096b762939"
	Jun 04 21:39:12 addons-369400 kubelet[2131]: E0604 21:39:12.620489    2131 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="084f21cc-3cae-47db-9530-3db3df3010ef" containerName="cloud-spanner-emulator"
	Jun 04 21:39:12 addons-369400 kubelet[2131]: E0604 21:39:12.620746    2131 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7bd83c11-b4f2-4684-8c1b-331e4f0920d7" containerName="busybox"
	Jun 04 21:39:12 addons-369400 kubelet[2131]: I0604 21:39:12.620980    2131 memory_manager.go:354] "RemoveStaleState removing state" podUID="7bd83c11-b4f2-4684-8c1b-331e4f0920d7" containerName="busybox"
	Jun 04 21:39:12 addons-369400 kubelet[2131]: I0604 21:39:12.621105    2131 memory_manager.go:354] "RemoveStaleState removing state" podUID="084f21cc-3cae-47db-9530-3db3df3010ef" containerName="cloud-spanner-emulator"
	Jun 04 21:39:12 addons-369400 kubelet[2131]: I0604 21:39:12.707647    2131 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/0b6c1522-d7d1-447a-8819-81869d226103-data\") pod \"helper-pod-delete-pvc-d2e31ec4-d787-4fa8-8e02-97096b762939\" (UID: \"0b6c1522-d7d1-447a-8819-81869d226103\") " pod="local-path-storage/helper-pod-delete-pvc-d2e31ec4-d787-4fa8-8e02-97096b762939"
	Jun 04 21:39:12 addons-369400 kubelet[2131]: I0604 21:39:12.707857    2131 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pnn8\" (UniqueName: \"kubernetes.io/projected/0b6c1522-d7d1-447a-8819-81869d226103-kube-api-access-9pnn8\") pod \"helper-pod-delete-pvc-d2e31ec4-d787-4fa8-8e02-97096b762939\" (UID: \"0b6c1522-d7d1-447a-8819-81869d226103\") " pod="local-path-storage/helper-pod-delete-pvc-d2e31ec4-d787-4fa8-8e02-97096b762939"
	Jun 04 21:39:12 addons-369400 kubelet[2131]: I0604 21:39:12.707895    2131 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/0b6c1522-d7d1-447a-8819-81869d226103-gcp-creds\") pod \"helper-pod-delete-pvc-d2e31ec4-d787-4fa8-8e02-97096b762939\" (UID: \"0b6c1522-d7d1-447a-8819-81869d226103\") " pod="local-path-storage/helper-pod-delete-pvc-d2e31ec4-d787-4fa8-8e02-97096b762939"
	Jun 04 21:39:12 addons-369400 kubelet[2131]: I0604 21:39:12.707921    2131 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/0b6c1522-d7d1-447a-8819-81869d226103-script\") pod \"helper-pod-delete-pvc-d2e31ec4-d787-4fa8-8e02-97096b762939\" (UID: \"0b6c1522-d7d1-447a-8819-81869d226103\") " pod="local-path-storage/helper-pod-delete-pvc-d2e31ec4-d787-4fa8-8e02-97096b762939"
	Jun 04 21:39:12 addons-369400 kubelet[2131]: I0604 21:39:12.764171    2131 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="084f21cc-3cae-47db-9530-3db3df3010ef" path="/var/lib/kubelet/pods/084f21cc-3cae-47db-9530-3db3df3010ef/volumes"
	Jun 04 21:39:12 addons-369400 kubelet[2131]: I0604 21:39:12.766734    2131 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bd83c11-b4f2-4684-8c1b-331e4f0920d7" path="/var/lib/kubelet/pods/7bd83c11-b4f2-4684-8c1b-331e4f0920d7/volumes"
	Jun 04 21:39:13 addons-369400 kubelet[2131]: I0604 21:39:13.494125    2131 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5f908ceffa09ecdae9cdbf9beff56d9526201da4c97778ead8d6a97aa7a372fd"
	Jun 04 21:39:15 addons-369400 kubelet[2131]: I0604 21:39:15.946281    2131 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/0b6c1522-d7d1-447a-8819-81869d226103-gcp-creds\") pod \"0b6c1522-d7d1-447a-8819-81869d226103\" (UID: \"0b6c1522-d7d1-447a-8819-81869d226103\") "
	Jun 04 21:39:15 addons-369400 kubelet[2131]: I0604 21:39:15.947404    2131 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9pnn8\" (UniqueName: \"kubernetes.io/projected/0b6c1522-d7d1-447a-8819-81869d226103-kube-api-access-9pnn8\") pod \"0b6c1522-d7d1-447a-8819-81869d226103\" (UID: \"0b6c1522-d7d1-447a-8819-81869d226103\") "
	Jun 04 21:39:15 addons-369400 kubelet[2131]: I0604 21:39:15.947706    2131 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/0b6c1522-d7d1-447a-8819-81869d226103-data\") pod \"0b6c1522-d7d1-447a-8819-81869d226103\" (UID: \"0b6c1522-d7d1-447a-8819-81869d226103\") "
	Jun 04 21:39:15 addons-369400 kubelet[2131]: I0604 21:39:15.946422    2131 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b6c1522-d7d1-447a-8819-81869d226103-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "0b6c1522-d7d1-447a-8819-81869d226103" (UID: "0b6c1522-d7d1-447a-8819-81869d226103"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Jun 04 21:39:15 addons-369400 kubelet[2131]: I0604 21:39:15.949036    2131 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/0b6c1522-d7d1-447a-8819-81869d226103-script\") pod \"0b6c1522-d7d1-447a-8819-81869d226103\" (UID: \"0b6c1522-d7d1-447a-8819-81869d226103\") "
	Jun 04 21:39:15 addons-369400 kubelet[2131]: I0604 21:39:15.949190    2131 reconciler_common.go:289] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/0b6c1522-d7d1-447a-8819-81869d226103-gcp-creds\") on node \"addons-369400\" DevicePath \"\""
	Jun 04 21:39:15 addons-369400 kubelet[2131]: I0604 21:39:15.949282    2131 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b6c1522-d7d1-447a-8819-81869d226103-data" (OuterVolumeSpecName: "data") pod "0b6c1522-d7d1-447a-8819-81869d226103" (UID: "0b6c1522-d7d1-447a-8819-81869d226103"). InnerVolumeSpecName "data". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Jun 04 21:39:15 addons-369400 kubelet[2131]: I0604 21:39:15.950187    2131 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b6c1522-d7d1-447a-8819-81869d226103-script" (OuterVolumeSpecName: "script") pod "0b6c1522-d7d1-447a-8819-81869d226103" (UID: "0b6c1522-d7d1-447a-8819-81869d226103"). InnerVolumeSpecName "script". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Jun 04 21:39:15 addons-369400 kubelet[2131]: I0604 21:39:15.954784    2131 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b6c1522-d7d1-447a-8819-81869d226103-kube-api-access-9pnn8" (OuterVolumeSpecName: "kube-api-access-9pnn8") pod "0b6c1522-d7d1-447a-8819-81869d226103" (UID: "0b6c1522-d7d1-447a-8819-81869d226103"). InnerVolumeSpecName "kube-api-access-9pnn8". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jun 04 21:39:16 addons-369400 kubelet[2131]: I0604 21:39:16.049940    2131 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-9pnn8\" (UniqueName: \"kubernetes.io/projected/0b6c1522-d7d1-447a-8819-81869d226103-kube-api-access-9pnn8\") on node \"addons-369400\" DevicePath \"\""
	Jun 04 21:39:16 addons-369400 kubelet[2131]: I0604 21:39:16.050134    2131 reconciler_common.go:289] "Volume detached for volume \"data\" (UniqueName: \"kubernetes.io/host-path/0b6c1522-d7d1-447a-8819-81869d226103-data\") on node \"addons-369400\" DevicePath \"\""
	Jun 04 21:39:16 addons-369400 kubelet[2131]: I0604 21:39:16.050178    2131 reconciler_common.go:289] "Volume detached for volume \"script\" (UniqueName: \"kubernetes.io/configmap/0b6c1522-d7d1-447a-8819-81869d226103-script\") on node \"addons-369400\" DevicePath \"\""
	Jun 04 21:39:16 addons-369400 kubelet[2131]: I0604 21:39:16.673130    2131 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5f908ceffa09ecdae9cdbf9beff56d9526201da4c97778ead8d6a97aa7a372fd"
	
	
	==> storage-provisioner [e7fd3d043cbb] <==
	I0604 21:34:44.576432       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0604 21:34:44.639930       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0604 21:34:44.639993       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0604 21:34:44.809070       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0604 21:34:44.809373       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-369400_950f9a36-d534-4ac2-a56d-b6e3e47f39ce!
	I0604 21:34:44.811966       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"329bab8f-bca4-486e-b680-5b8a5385ca9c", APIVersion:"v1", ResourceVersion:"715", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-369400_950f9a36-d534-4ac2-a56d-b6e3e47f39ce became leader
	I0604 21:34:44.909993       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-369400_950f9a36-d534-4ac2-a56d-b6e3e47f39ce!
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0604 21:39:07.500450   11192 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
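The warning captured in the stderr block above originates on the Windows build agent, not in the cluster under test: the Docker CLI context metadata under C:\Users\jenkins.minikube6\.docker\contexts\meta cannot be opened, so minikube cannot resolve the "default" context. A minimal sketch for inspecting and resetting the CLI context state on the agent, assuming the Docker CLI is installed there (illustration only; whether resetting the context silences this warning was not verified in this run):

    docker context ls            # list known contexts and show which one is currently selected
    docker context use default   # point the CLI back at the built-in default context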
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p addons-369400 -n addons-369400
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p addons-369400 -n addons-369400: (14.2302131s)
helpers_test.go:261: (dbg) Run:  kubectl --context addons-369400 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: nginx ingress-nginx-admission-create-rqf6s ingress-nginx-admission-patch-b8nnf volcano-admission-init-9t2h8
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-369400 describe pod nginx ingress-nginx-admission-create-rqf6s ingress-nginx-admission-patch-b8nnf volcano-admission-init-9t2h8
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-369400 describe pod nginx ingress-nginx-admission-create-rqf6s ingress-nginx-admission-patch-b8nnf volcano-admission-init-9t2h8: exit status 1 (508.8376ms)

                                                
                                                
-- stdout --
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-369400/172.20.139.74
	Start Time:       Tue, 04 Jun 2024 21:39:27 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vczhg (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-vczhg:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  7s    default-scheduler  Successfully assigned default/nginx to addons-369400
	  Normal  Pulling    6s    kubelet            Pulling image "docker.io/nginx:alpine"
	  Normal  Pulled     1s    kubelet            Successfully pulled image "docker.io/nginx:alpine" in 5.261s (5.261s including waiting). Image size: 48311676 bytes.

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-rqf6s" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-b8nnf" not found
	Error from server (NotFound): pods "volcano-admission-init-9t2h8" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-369400 describe pod nginx ingress-nginx-admission-create-rqf6s ingress-nginx-admission-patch-b8nnf volcano-admission-init-9t2h8: exit status 1
--- FAIL: TestAddons/parallel/Registry (77.49s)
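Note on the stderr noise above: the only unexpected output is the recurring "Unable to resolve the current Docker CLI context \"default\"" warning, and the opaque hex directory in that path is simply the SHA-256 digest of the context name. The sketch below (illustrative only, not minikube or docker/cli code; the helper name contextMetaPath is ours) reproduces how the docker CLI derives its per-context metadata path, which is why the missing file resolves to the 37a8eec1... directory seen in the log.

```go
// Illustrative sketch: the docker CLI keeps per-context metadata under
// ~/.docker/contexts/meta/<sha256(context name)>/meta.json, so the "default"
// context maps to the 37a8eec1... path reported as missing in the warning.
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"os"
	"path/filepath"
)

// contextMetaPath builds the metadata path the CLI would look up for a context.
func contextMetaPath(home, name string) string {
	digest := sha256.Sum256([]byte(name))
	return filepath.Join(home, ".docker", "contexts", "meta",
		hex.EncodeToString(digest[:]), "meta.json")
}

func main() {
	home, _ := os.UserHomeDir()
	// Prints ...\.docker\contexts\meta\37a8eec1ce19687d...\meta.json on Windows.
	fmt.Println(contextMetaPath(home, "default"))
}
```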

                                                
                                    
TestCertOptions (10800.462s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-options-065400 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperv
E0605 00:23:17.056292   14064 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-369400\client.crt: The system cannot find the path specified.
cert_options_test.go:49: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-options-065400 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperv: (6m17.8346341s)
cert_options_test.go:60: (dbg) Run:  out/minikube-windows-amd64.exe -p cert-options-065400 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Done: out/minikube-windows-amd64.exe -p cert-options-065400 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": (10.7788263s)
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-065400 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p cert-options-065400 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Done: out/minikube-windows-amd64.exe ssh -p cert-options-065400 -- "sudo cat /etc/kubernetes/admin.conf": (10.7846661s)
helpers_test.go:175: Cleaning up "cert-options-065400" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-options-065400
panic: test timed out after 3h0m0s
running tests:
	TestCertOptions (6m53s)
	TestNetworkPlugins (31m13s)
	TestNetworkPlugins/group/auto (3m25s)
	TestNetworkPlugins/group/auto/Start (3m25s)
	TestNetworkPlugins/group/kindnet (2m18s)
	TestNetworkPlugins/group/kindnet/Start (2m18s)
	TestPause (4m26s)
	TestPause/serial (4m26s)
	TestPause/serial/Start (4m26s)
	TestStartStop (21m42s)

                                                
                                                
goroutine 2375 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2366 +0x385
created by time.goFunc
	/usr/local/go/src/time/sleep.go:177 +0x2d

                                                
                                                
goroutine 1 [chan receive, 27 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc0005c0ea0, 0xc00089dbb0)
	/usr/local/go/src/testing/testing.go:1695 +0x134
testing.runTests(0xc000670558, {0x4eb2020, 0x2a, 0x2a}, {0x2ae6712?, 0x92806f?, 0x4ed52a0?})
	/usr/local/go/src/testing/testing.go:2159 +0x445
testing.(*M).Run(0xc0006cbcc0)
	/usr/local/go/src/testing/testing.go:2027 +0x68b
k8s.io/minikube/test/integration.TestMain(0xc0006cbcc0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/main_test.go:62 +0x8b
main.main()
	_testmain.go:131 +0x195

                                                
                                                
goroutine 12 [select]:
go.opencensus.io/stats/view.(*worker).start(0xc00010fa80)
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:292 +0x9f
created by go.opencensus.io/stats/view.init.0 in goroutine 1
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:34 +0x8d

                                                
                                                
goroutine 38 [select]:
k8s.io/klog/v2.(*flushDaemon).run.func1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.120.1/klog.go:1174 +0x117
created by k8s.io/klog/v2.(*flushDaemon).run in goroutine 26
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.120.1/klog.go:1170 +0x171

                                                
                                                
goroutine 173 [chan receive, 173 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000926680, 0xc000054420)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 161
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cache.go:122 +0x585

                                                
                                                
goroutine 2298 [chan receive, 21 minutes]:
testing.(*testContext).waitParallel(0xc000924640)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0008f2b60)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0008f2b60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0008f2b60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc0008f2b60, 0xc000211500)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2282
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2295 [chan receive, 21 minutes]:
testing.(*testContext).waitParallel(0xc000924640)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc000642820)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000642820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc000642820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc000642820, 0xc000211400)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2282
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 172 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc000a8f380)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 161
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 687 [syscall, locked to thread]:
syscall.SyscallN(0x7ffbce424de0?, {0xc0014316a0?, 0x3?, 0x0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0x3?, 0x3?, 0x1?, 0x2?, 0x0?)
	/usr/local/go/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0x46c, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1142 +0x5d
os.(*Process).wait(0xc000051320)
	/usr/local/go/src/os/exec_windows.go:18 +0x50
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc0009c4b00)
	/usr/local/go/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc0009c4b00)
	/usr/local/go/src/os/exec/exec.go:607 +0x2d
k8s.io/minikube/test/integration.Run(0xc000d75040, 0xc0009c4b00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.Cleanup(0xc000d75040, {0xc00057efc0, 0x13}, 0xc001e06060)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:178 +0x15f
k8s.io/minikube/test/integration.CleanupWithLogs(0xc000d75040, {0xc00057efc0, 0x13}, 0xc001e06060)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:192 +0x19d
k8s.io/minikube/test/integration.TestCertOptions(0xc000d75040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/cert_options_test.go:109 +0x1090
testing.tRunner(0xc000d75040, 0x35967c0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2163 [chan receive, 3 minutes]:
testing.(*T).Run(0xc0006431e0, {0x2a8a81e?, 0x3ae5db0?}, 0xc000d620c0)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0006431e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:111 +0x5de
testing.tRunner(0xc0006431e0, 0xc0007c6a80)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2131
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 185 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3b10d20, 0xc000054420}, 0xc00156ff50, 0xc00156ff98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3b10d20, 0xc000054420}, 0x90?, 0xc00156ff50, 0xc00156ff98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3b10d20?, 0xc000054420?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc00156ffd0?, 0x9fe404?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 173
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 184 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc000926650, 0x3b)
	/usr/local/go/src/runtime/sema.go:569 +0x15d
sync.(*Cond).Wait(0x257f780?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc000a8f260)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000926680)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000067490, {0x3aed240, 0xc00095f290}, 0x1, 0xc000054420)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000067490, 0x3b9aca00, 0x0, 0x1, 0xc000054420)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 173
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 2359 [select, 5 minutes]:
os/exec.(*Cmd).watchCtx(0xc00151a000, 0xc000742a80)
	/usr/local/go/src/os/exec/exec.go:764 +0xb5
created by os/exec.(*Cmd).Start in goroutine 2356
	/usr/local/go/src/os/exec/exec.go:750 +0x9f3

                                                
                                                
goroutine 2129 [chan receive, 31 minutes]:
testing.(*testContext).waitParallel(0xc000924640)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc000642ea0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000642ea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc000642ea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc000642ea0, 0xc0007c6980)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2131
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2128 [chan receive, 31 minutes]:
testing.(*testContext).waitParallel(0xc000924640)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc000642d00)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000642d00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc000642d00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc000642d00, 0xc0007c6780)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2131
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 186 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 185
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 2162 [chan receive, 31 minutes]:
testing.(*testContext).waitParallel(0xc000924640)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc000643040)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000643040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc000643040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc000643040, 0xc0007c6a00)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2131
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2318 [syscall, 3 minutes, locked to thread]:
syscall.SyscallN(0x7ffbce424de0?, {0xc000851bd0?, 0x3?, 0x0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0x3?, 0x3?, 0x1?, 0x2?, 0x0?)
	/usr/local/go/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0x724, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1142 +0x5d
os.(*Process).wait(0xc0019b0600)
	/usr/local/go/src/os/exec_windows.go:18 +0x50
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc0009c49a0)
	/usr/local/go/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc0009c49a0)
	/usr/local/go/src/os/exec/exec.go:607 +0x2d
k8s.io/minikube/test/integration.Run(0xc0008f2d00, 0xc0009c49a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1.1(0xc0008f2d00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:112 +0x52
testing.tRunner(0xc0008f2d00, 0xc000d620c0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2163
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 1047 [chan send, 149 minutes]:
os/exec.(*Cmd).watchCtx(0xc001b742c0, 0xc001921680)
	/usr/local/go/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 1046
	/usr/local/go/src/os/exec/exec.go:750 +0x9f3

                                                
                                                
goroutine 2127 [chan receive, 31 minutes]:
testing.(*testContext).waitParallel(0xc000924640)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0006429c0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0006429c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0006429c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0006429c0, 0xc0007c6100)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2131
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2296 [chan receive, 21 minutes]:
testing.(*testContext).waitParallel(0xc000924640)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc000643a00)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000643a00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc000643a00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc000643a00, 0xc000211440)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2282
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2131 [chan receive, 31 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc0008f2000, 0xc00080c738)
	/usr/local/go/src/testing/testing.go:1695 +0x134
created by testing.(*T).Run in goroutine 2027
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2282 [chan receive, 21 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc000a4cb60, 0x3596ac0)
	/usr/local/go/src/testing/testing.go:1695 +0x134
created by testing.(*T).Run in goroutine 2080
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2166 [chan receive, 31 minutes]:
testing.(*testContext).waitParallel(0xc000924640)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0006436c0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0006436c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0006436c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0006436c0, 0xc0007c6c00)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2131
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2165 [chan receive, 31 minutes]:
testing.(*testContext).waitParallel(0xc000924640)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc000643520)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000643520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc000643520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc000643520, 0xc0007c6b80)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2131
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2358 [syscall, 5 minutes, locked to thread]:
syscall.SyscallN(0xc0008e8000?, {0xc001939b20?, 0x887ea5?, 0x4f62700?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x1?, 0xc001939b80?, 0x87fdd6?, 0x4f62700?, 0xc001939c08?, 0x87281b?, 0x868ba6?, 0x41?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x444, {0xc00067f93a?, 0x2c6, 0xc00067f800?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc001537908?, {0xc00067f93a?, 0x8ac171?, 0x400?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc001537908, {0xc00067f93a, 0x2c6, 0x2c6})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000a8c688, {0xc00067f93a?, 0xc000a93a40?, 0x13a?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc001574270, {0x3aebe00, 0xc0000a65e8})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3aebf40, 0xc001574270}, {0x3aebe00, 0xc0000a65e8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc001939e78?, {0x3aebf40, 0xc001574270})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x4e65b50?, {0x3aebf40?, 0xc001574270?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x3aebf40, 0xc001574270}, {0x3aebec0, 0xc000a8c688}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc0007425a0?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2356
	/usr/local/go/src/os/exec/exec.go:723 +0xa2b

                                                
                                                
goroutine 2320 [syscall, 3 minutes, locked to thread]:
syscall.SyscallN(0x0?, {0xc00140fb20?, 0x887ea5?, 0x4f62700?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0xc00140fb59?, 0xc00140fb80?, 0x87fdd6?, 0x4f62700?, 0xc00140fc08?, 0x872985?, 0x0?, 0x0?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x4f4, {0xc00080bca1?, 0x35f, 0x92417f?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc0007ee788?, {0xc00080bca1?, 0x8ac1be?, 0x2000?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc0007ee788, {0xc00080bca1, 0x35f, 0x35f})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000a8c720, {0xc00080bca1?, 0xc0017af500?, 0x1000?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc000d621e0, {0x3aebe00, 0xc0000a6560})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3aebf40, 0xc000d621e0}, {0x3aebe00, 0xc0000a6560}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc00140fe78?, {0x3aebf40, 0xc000d621e0})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x4e65b50?, {0x3aebf40?, 0xc000d621e0?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x3aebf40, 0xc000d621e0}, {0x3aebec0, 0xc000a8c720}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc000742c00?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2318
	/usr/local/go/src/os/exec/exec.go:723 +0xa2b

                                                
                                                
goroutine 2355 [chan receive, 5 minutes]:
testing.(*T).Run(0xc0005c1520, {0x2a8a81e?, 0x24?}, 0xc001412000)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestPause.func1(0xc0005c1520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/pause_test.go:65 +0x1ee
testing.tRunner(0xc0005c1520, 0xc001574030)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2029
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2164 [chan receive, 31 minutes]:
testing.(*testContext).waitParallel(0xc000924640)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc000643380)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000643380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc000643380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc000643380, 0xc0007c6b00)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2131
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2390 [syscall, locked to thread]:
syscall.SyscallN(0x0?, {0xc000d77b20?, 0x887ea5?, 0x4f62700?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0xc000d77b98?, 0xc000d77b80?, 0x87fdd6?, 0x4f62700?, 0xc000d77c08?, 0x872985?, 0x191c2120108?, 0x3b10a41?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x538, {0xc000d0ed3a?, 0x2c6, 0xc000d0ec00?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc0007efb88?, {0xc000d0ed3a?, 0x8ac1be?, 0x400?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc0007efb88, {0xc000d0ed3a, 0x2c6, 0x2c6})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000112970, {0xc000d0ed3a?, 0xc0018f81c0?, 0x13a?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0007b38c0, {0x3aebe00, 0xc000a8c730})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3aebf40, 0xc0007b38c0}, {0x3aebe00, 0xc000a8c730}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc000d77e78?, {0x3aebf40, 0xc0007b38c0})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x4e65b50?, {0x3aebf40?, 0xc0007b38c0?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x3aebf40, 0xc0007b38c0}, {0x3aebec0, 0xc000112970}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc001920360?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 687
	/usr/local/go/src/os/exec/exec.go:723 +0xa2b

                                                
                                                
goroutine 2126 [chan receive, 3 minutes]:
testing.(*T).Run(0xc0008f21a0, {0x2a8a81e?, 0x3ae5db0?}, 0xc000976000)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0008f21a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:111 +0x5de
testing.tRunner(0xc0008f21a0, 0xc00051a200)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2131
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2357 [syscall, 3 minutes, locked to thread]:
syscall.SyscallN(0x0?, {0xc001a03b20?, 0x887ea5?, 0x4f62700?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x4d?, 0xc001a03b80?, 0x87fdd6?, 0x4f62700?, 0xc001a03c08?, 0x872985?, 0x191c2120a28?, 0x4d?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x738, {0xc000914a28?, 0x5d8, 0x92417f?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc001537188?, {0xc000914a28?, 0x8ac1be?, 0x800?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc001537188, {0xc000914a28, 0x5d8, 0x5d8})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000a8c650, {0xc000914a28?, 0xc001a03d98?, 0x227?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc001574240, {0x3aebe00, 0xc0001129d8})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3aebf40, 0xc001574240}, {0x3aebe00, 0xc0001129d8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x3aebf40, 0xc001574240})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x4e65b50?, {0x3aebf40?, 0xc001574240?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x3aebf40, 0xc001574240}, {0x3aebec0, 0xc000a8c650}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc001920ea0?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2356
	/usr/local/go/src/os/exec/exec.go:723 +0xa2b

                                                
                                                
goroutine 2356 [syscall, 5 minutes, locked to thread]:
syscall.SyscallN(0x7ffbce424de0?, {0xc001411a78?, 0x3?, 0x0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0x3?, 0x3?, 0x1?, 0x2?, 0x0?)
	/usr/local/go/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0x748, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1142 +0x5d
os.(*Process).wait(0xc000051cb0)
	/usr/local/go/src/os/exec_windows.go:18 +0x50
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc00151a000)
	/usr/local/go/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc00151a000)
	/usr/local/go/src/os/exec/exec.go:607 +0x2d
k8s.io/minikube/test/integration.Run(0xc0005c1860, 0xc00151a000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.validateFreshStart({0x3b10b60, 0xc0007c8310}, 0xc0005c1860, {0xc00189c0a0, 0xc})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/pause_test.go:80 +0x275
k8s.io/minikube/test/integration.TestPause.func1.1(0xc0005c1860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/pause_test.go:66 +0x43
testing.tRunner(0xc0005c1860, 0xc001412000)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2355
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2363 [syscall, 3 minutes, locked to thread]:
syscall.SyscallN(0x7ffbce424de0?, {0xc000d61bd0?, 0x3?, 0x0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0x3?, 0x3?, 0x1?, 0x2?, 0x0?)
	/usr/local/go/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0x760, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1142 +0x5d
os.(*Process).wait(0xc001974720)
	/usr/local/go/src/os/exec_windows.go:18 +0x50
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc00151a420)
	/usr/local/go/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc00151a420)
	/usr/local/go/src/os/exec/exec.go:607 +0x2d
k8s.io/minikube/test/integration.Run(0xc0005c1a00, 0xc00151a420)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1.1(0xc0005c1a00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:112 +0x52
testing.tRunner(0xc0005c1a00, 0xc000976000)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2126
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2283 [chan receive, 21 minutes]:
testing.(*testContext).waitParallel(0xc000924640)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc000a4cd00)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000a4cd00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc000a4cd00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc000a4cd00, 0xc0017d0080)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2282
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 734 [IO wait, 159 minutes]:
internal/poll.runtime_pollWait(0x191e78d67a0, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0x87fdd6?, 0x4f62700?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.execIO(0xc001850ca0, 0xc001c85bb0)
	/usr/local/go/src/internal/poll/fd_windows.go:175 +0xe6
internal/poll.(*FD).acceptOne(0xc001850c88, 0x2f8, {0xc000808000?, 0x0?, 0x0?}, 0xc000100808?)
	/usr/local/go/src/internal/poll/fd_windows.go:944 +0x67
internal/poll.(*FD).Accept(0xc001850c88, 0xc001c85d90)
	/usr/local/go/src/internal/poll/fd_windows.go:978 +0x1bc
net.(*netFD).accept(0xc001850c88)
	/usr/local/go/src/net/fd_windows.go:178 +0x54
net.(*TCPListener).accept(0xc00087a1e0)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x1e
net.(*TCPListener).Accept(0xc00087a1e0)
	/usr/local/go/src/net/tcpsock.go:327 +0x30
net/http.(*Server).Serve(0xc0005ea0f0, {0x3b03dc0, 0xc00087a1e0})
	/usr/local/go/src/net/http/server.go:3255 +0x33e
net/http.(*Server).ListenAndServe(0xc0005ea0f0)
	/usr/local/go/src/net/http/server.go:3184 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(0xd?, 0xc000d74ea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2209 +0x18
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 731
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2208 +0x129

                                                
                                                
goroutine 2391 [select]:
os/exec.(*Cmd).watchCtx(0xc0009c4b00, 0xc001920f60)
	/usr/local/go/src/os/exec/exec.go:764 +0xb5
created by os/exec.(*Cmd).Start in goroutine 687
	/usr/local/go/src/os/exec/exec.go:750 +0x9f3

                                                
                                                
goroutine 889 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0019cacc0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 907
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 2321 [select, 3 minutes]:
os/exec.(*Cmd).watchCtx(0xc0009c49a0, 0xc001920300)
	/usr/local/go/src/os/exec/exec.go:764 +0xb5
created by os/exec.(*Cmd).Start in goroutine 2318
	/usr/local/go/src/os/exec/exec.go:750 +0x9f3

                                                
                                                
goroutine 1166 [chan send, 141 minutes]:
os/exec.(*Cmd).watchCtx(0xc001b75a20, 0xc000054a80)
	/usr/local/go/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 898
	/usr/local/go/src/os/exec/exec.go:750 +0x9f3

                                                
                                                
goroutine 2027 [chan receive, 31 minutes]:
testing.(*T).Run(0xc000a4c340, {0x2a8a819?, 0x8df48d?}, 0xc00080c738)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestNetworkPlugins(0xc000a4c340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:52 +0xd4
testing.tRunner(0xc000a4c340, 0x35968a0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2297 [chan receive, 21 minutes]:
testing.(*testContext).waitParallel(0xc000924640)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc000643d40)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000643d40)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc000643d40)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc000643d40, 0xc000211480)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2282
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 890 [chan receive, 149 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001412f80, 0xc000054420)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 907
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cache.go:122 +0x585

                                                
                                                
goroutine 870 [sync.Cond.Wait, 3 minutes]:
sync.runtime_notifyListWait(0xc001412f50, 0x35)
	/usr/local/go/src/runtime/sema.go:569 +0x15d
sync.(*Cond).Wait(0x257f780?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0019caba0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001412f80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000993240, {0x3aed240, 0xc0006a2660}, 0x1, 0xc000054420)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000993240, 0x3b9aca00, 0x0, 0x1, 0xc000054420)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 890
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 871 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3b10d20, 0xc000054420}, 0xc0008fbf50, 0xc0008fbf98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3b10d20, 0xc000054420}, 0x11?, 0xc0008fbf50, 0xc0008fbf98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3b10d20?, 0xc000054420?}, 0xc000a4c1a0?, 0x9b7c60?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x9b8bc5?, 0xc000a4c1a0?, 0xc000a434c0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 890
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 872 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 871
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 2294 [chan receive, 21 minutes]:
testing.(*testContext).waitParallel(0xc000924640)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc000642680)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000642680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc000642680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc000642680, 0xc000210640)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2282
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2080 [chan receive, 21 minutes]:
testing.(*T).Run(0xc000a4cea0, {0x2a8a819?, 0x9b7333?}, 0x3596ac0)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop(0xc000a4cea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:46 +0x35
testing.tRunner(0xc000a4cea0, 0x35968e8)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2319 [syscall, 3 minutes, locked to thread]:
syscall.SyscallN(0x0?, {0xc001c81b20?, 0xc0005ea1e0?, 0xf?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0xc001c81ba0?, 0x9fc799?, 0xc0000506f9?, 0x1e?, 0xc001c81c08?, 0x87281b?, 0xc00179a340?, 0xc00151a420?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x740, {0xc0008999ef?, 0x211, 0x92417f?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc0007ee288?, {0xc0008999ef?, 0x13?, 0x400?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc0007ee288, {0xc0008999ef, 0x211, 0x211})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000a8c708, {0xc0008999ef?, 0xc001b2e000?, 0x6a?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc000d62180, {0x3aebe00, 0xc000112888})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3aebf40, 0xc000d62180}, {0x3aebe00, 0xc000112888}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc001c81e78?, {0x3aebf40, 0xc000d62180})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x4e65b50?, {0x3aebf40?, 0xc000d62180?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x3aebf40, 0xc000d62180}, {0x3aebec0, 0xc000a8c708}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc001920180?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2318
	/usr/local/go/src/os/exec/exec.go:723 +0xa2b

                                                
                                                
goroutine 2029 [chan receive, 5 minutes]:
testing.(*T).Run(0xc000a4c9c0, {0x2a8bd2c?, 0xd18c2e2800?}, 0xc001574030)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestPause(0xc000a4c9c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/pause_test.go:41 +0x159
testing.tRunner(0xc000a4c9c0, 0x35968b8)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2364 [syscall, 3 minutes, locked to thread]:
syscall.SyscallN(0xc00058082a?, {0xc001b11b20?, 0x887ea5?, 0x4f62700?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x41?, 0xc001b11b80?, 0x87fdd6?, 0x4f62700?, 0xc001b11c08?, 0x87281b?, 0x191c2120598?, 0xc001b11c35?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x764, {0xc0008989e6?, 0x21a, 0x92417f?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc001550a08?, {0xc0008989e6?, 0x8a5170?, 0x400?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc001550a08, {0xc0008989e6, 0x21a, 0x21a})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000112308, {0xc0008989e6?, 0xc001b11d98?, 0x67?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc000976120, {0x3aebe00, 0xc000a8c550})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3aebf40, 0xc000976120}, {0x3aebe00, 0xc000a8c550}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x3aebf40, 0xc000976120})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x4e65b50?, {0x3aebf40?, 0xc000976120?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x3aebf40, 0xc000976120}, {0x3aebec0, 0xc000112308}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc000730300?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2363
	/usr/local/go/src/os/exec/exec.go:723 +0xa2b

                                                
                                                
goroutine 2366 [select, 3 minutes]:
os/exec.(*Cmd).watchCtx(0xc00151a420, 0xc000055800)
	/usr/local/go/src/os/exec/exec.go:764 +0xb5
created by os/exec.(*Cmd).Start in goroutine 2363
	/usr/local/go/src/os/exec/exec.go:750 +0x9f3

                                                
                                                
goroutine 2365 [syscall, 3 minutes, locked to thread]:
syscall.SyscallN(0x8e8e3d?, {0xc00193bb20?, 0x887ea5?, 0x4f62700?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0xc000a14410?, 0xc00193bb80?, 0x87fdd6?, 0x4f62700?, 0xc00193bc08?, 0x872985?, 0x191c2120598?, 0xc00193bb67?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x734, {0xc000751d17?, 0x2e9, 0x92417f?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc001551908?, {0xc000751d17?, 0x9b4e65?, 0x2000?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc001551908, {0xc000751d17, 0x2e9, 0x2e9})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0001126c8, {0xc000751d17?, 0x191c212da88?, 0xe08?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc000976150, {0x3aebe00, 0xc0000a6488})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3aebf40, 0xc000976150}, {0x3aebe00, 0xc0000a6488}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc00193be78?, {0x3aebf40, 0xc000976150})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x4e65b50?, {0x3aebf40?, 0xc000976150?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x3aebf40, 0xc000976150}, {0x3aebec0, 0xc0001126c8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc000730240?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2363
	/usr/local/go/src/os/exec/exec.go:723 +0xa2b

                                                
                                                
goroutine 2389 [syscall, locked to thread]:
syscall.SyscallN(0x0?, {0xc001b0fb20?, 0x887ea5?, 0x4f62700?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0xc0001fed41?, 0xc001b0fb80?, 0x87fdd6?, 0x4f62700?, 0xc001b0fc08?, 0x872985?, 0x191c2120a28?, 0x41?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x40c, {0xc00067f45c?, 0x3a4, 0x92417f?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc0007ef408?, {0xc00067f45c?, 0x8ac1be?, 0x400?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc0007ef408, {0xc00067f45c, 0x3a4, 0x3a4})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0001128e0, {0xc00067f45c?, 0xc001b0fd98?, 0x2b?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0007b3770, {0x3aebe00, 0xc0008125e8})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3aebf40, 0xc0007b3770}, {0x3aebe00, 0xc0008125e8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x3aebf40, 0xc0007b3770})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x4e65b50?, {0x3aebf40?, 0xc0007b3770?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x3aebf40, 0xc0007b3770}, {0x3aebec0, 0xc0001128e0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc000968580?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 687
	/usr/local/go/src/os/exec/exec.go:723 +0xa2b
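
Note on the dump above: this is the stock go test timeout panic. testing.(*M).startAlarm fired after the 3h0m0s budget while TestCertOptions, TestNetworkPlugins, TestPause and TestStartStop were still in flight, so those tests were killed without their own post-mortems. A minimal sketch of one way a slow helper could fail fast before that alarm, using the standard t.Deadline() API; waitBudget is a hypothetical helper, not part of the minikube harness:

```go
// Illustrative sketch: derive a context that expires safely before the
// go test -timeout deadline, keeping a reserve for cleanup and log collection.
package integration

import (
	"context"
	"testing"
	"time"
)

// waitBudget returns a context bounded by the test binary's deadline minus a
// reserve, or an unbounded one if no -timeout deadline is set.
func waitBudget(t *testing.T, reserve time.Duration) (context.Context, context.CancelFunc) {
	t.Helper()
	if deadline, ok := t.Deadline(); ok {
		return context.WithDeadline(context.Background(), deadline.Add(-reserve))
	}
	return context.WithCancel(context.Background())
}
```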

                                                
                                    
TestErrorSpam/setup (207.41s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -p nospam-658400 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-658400 --driver=hyperv
error_spam_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -p nospam-658400 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-658400 --driver=hyperv: (3m27.4067161s)
error_spam_test.go:96: unexpected stderr: "W0604 21:43:23.744615    4224 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube6\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."
error_spam_test.go:110: minikube stdout:
* [nospam-658400] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4474 Build 19045.4474
- KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
- MINIKUBE_LOCATION=19024
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
* Using the hyperv driver based on user configuration
* Starting "nospam-658400" primary control-plane node in "nospam-658400" cluster
* Creating hyperv VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
* Preparing Kubernetes v1.30.1 on Docker 26.1.3 ...
- Generating certificates and keys ...
- Booting up control plane ...
- Configuring RBAC rules ...
* Configuring bridge CNI (Container Networking Interface) ...
* Verifying Kubernetes components...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Enabled addons: storage-provisioner, default-storageclass
* Done! kubectl is now configured to use "nospam-658400" cluster and "default" namespace by default
error_spam_test.go:111: minikube stderr:
W0604 21:43:23.744615    4224 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
--- FAIL: TestErrorSpam/setup (207.41s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (36.08s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:731: link out/minikube-windows-amd64.exe out\kubectl.exe: Cannot create a file when that file already exists.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-235400 -n functional-235400
E0604 21:58:16.978155   14064 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-369400\client.crt: The system cannot find the path specified.
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-235400 -n functional-235400: (12.9580464s)
helpers_test.go:244: <<< TestFunctional/serial/MinikubeKubectlCmdDirectly FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmdDirectly]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-235400 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-235400 logs -n 25: (9.1862129s)
helpers_test.go:252: TestFunctional/serial/MinikubeKubectlCmdDirectly logs: 
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| Command |                            Args                             |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| pause   | nospam-658400 --log_dir                                     | nospam-658400     | minikube6\jenkins | v1.33.1 | 04 Jun 24 21:48 UTC | 04 Jun 24 21:48 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-658400 |                   |                   |         |                     |                     |
	|         | pause                                                       |                   |                   |         |                     |                     |
	| unpause | nospam-658400 --log_dir                                     | nospam-658400     | minikube6\jenkins | v1.33.1 | 04 Jun 24 21:48 UTC | 04 Jun 24 21:48 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-658400 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| unpause | nospam-658400 --log_dir                                     | nospam-658400     | minikube6\jenkins | v1.33.1 | 04 Jun 24 21:48 UTC | 04 Jun 24 21:48 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-658400 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| unpause | nospam-658400 --log_dir                                     | nospam-658400     | minikube6\jenkins | v1.33.1 | 04 Jun 24 21:48 UTC | 04 Jun 24 21:48 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-658400 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| stop    | nospam-658400 --log_dir                                     | nospam-658400     | minikube6\jenkins | v1.33.1 | 04 Jun 24 21:48 UTC | 04 Jun 24 21:49 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-658400 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| stop    | nospam-658400 --log_dir                                     | nospam-658400     | minikube6\jenkins | v1.33.1 | 04 Jun 24 21:49 UTC | 04 Jun 24 21:49 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-658400 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| stop    | nospam-658400 --log_dir                                     | nospam-658400     | minikube6\jenkins | v1.33.1 | 04 Jun 24 21:49 UTC | 04 Jun 24 21:49 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-658400 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| delete  | -p nospam-658400                                            | nospam-658400     | minikube6\jenkins | v1.33.1 | 04 Jun 24 21:49 UTC | 04 Jun 24 21:50 UTC |
	| start   | -p functional-235400                                        | functional-235400 | minikube6\jenkins | v1.33.1 | 04 Jun 24 21:50 UTC | 04 Jun 24 21:54 UTC |
	|         | --memory=4000                                               |                   |                   |         |                     |                     |
	|         | --apiserver-port=8441                                       |                   |                   |         |                     |                     |
	|         | --wait=all --driver=hyperv                                  |                   |                   |         |                     |                     |
	| start   | -p functional-235400                                        | functional-235400 | minikube6\jenkins | v1.33.1 | 04 Jun 24 21:54 UTC | 04 Jun 24 21:56 UTC |
	|         | --alsologtostderr -v=8                                      |                   |                   |         |                     |                     |
	| cache   | functional-235400 cache add                                 | functional-235400 | minikube6\jenkins | v1.33.1 | 04 Jun 24 21:56 UTC | 04 Jun 24 21:56 UTC |
	|         | registry.k8s.io/pause:3.1                                   |                   |                   |         |                     |                     |
	| cache   | functional-235400 cache add                                 | functional-235400 | minikube6\jenkins | v1.33.1 | 04 Jun 24 21:56 UTC | 04 Jun 24 21:56 UTC |
	|         | registry.k8s.io/pause:3.3                                   |                   |                   |         |                     |                     |
	| cache   | functional-235400 cache add                                 | functional-235400 | minikube6\jenkins | v1.33.1 | 04 Jun 24 21:56 UTC | 04 Jun 24 21:57 UTC |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | functional-235400 cache add                                 | functional-235400 | minikube6\jenkins | v1.33.1 | 04 Jun 24 21:57 UTC | 04 Jun 24 21:57 UTC |
	|         | minikube-local-cache-test:functional-235400                 |                   |                   |         |                     |                     |
	| cache   | functional-235400 cache delete                              | functional-235400 | minikube6\jenkins | v1.33.1 | 04 Jun 24 21:57 UTC | 04 Jun 24 21:57 UTC |
	|         | minikube-local-cache-test:functional-235400                 |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube6\jenkins | v1.33.1 | 04 Jun 24 21:57 UTC | 04 Jun 24 21:57 UTC |
	|         | registry.k8s.io/pause:3.3                                   |                   |                   |         |                     |                     |
	| cache   | list                                                        | minikube          | minikube6\jenkins | v1.33.1 | 04 Jun 24 21:57 UTC | 04 Jun 24 21:57 UTC |
	| ssh     | functional-235400 ssh sudo                                  | functional-235400 | minikube6\jenkins | v1.33.1 | 04 Jun 24 21:57 UTC | 04 Jun 24 21:57 UTC |
	|         | crictl images                                               |                   |                   |         |                     |                     |
	| ssh     | functional-235400                                           | functional-235400 | minikube6\jenkins | v1.33.1 | 04 Jun 24 21:57 UTC | 04 Jun 24 21:57 UTC |
	|         | ssh sudo docker rmi                                         |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| ssh     | functional-235400 ssh                                       | functional-235400 | minikube6\jenkins | v1.33.1 | 04 Jun 24 21:57 UTC |                     |
	|         | sudo crictl inspecti                                        |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | functional-235400 cache reload                              | functional-235400 | minikube6\jenkins | v1.33.1 | 04 Jun 24 21:57 UTC | 04 Jun 24 21:57 UTC |
	| ssh     | functional-235400 ssh                                       | functional-235400 | minikube6\jenkins | v1.33.1 | 04 Jun 24 21:57 UTC | 04 Jun 24 21:58 UTC |
	|         | sudo crictl inspecti                                        |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube6\jenkins | v1.33.1 | 04 Jun 24 21:58 UTC | 04 Jun 24 21:58 UTC |
	|         | registry.k8s.io/pause:3.1                                   |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube6\jenkins | v1.33.1 | 04 Jun 24 21:58 UTC | 04 Jun 24 21:58 UTC |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| kubectl | functional-235400 kubectl --                                | functional-235400 | minikube6\jenkins | v1.33.1 | 04 Jun 24 21:58 UTC | 04 Jun 24 21:58 UTC |
	|         | --context functional-235400                                 |                   |                   |         |                     |                     |
	|         | get pods                                                    |                   |                   |         |                     |                     |
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/04 21:54:22
	Running on machine: minikube6
	Binary: Built with gc go1.22.3 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0604 21:54:22.556875    6964 out.go:291] Setting OutFile to fd 692 ...
	I0604 21:54:22.558088    6964 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0604 21:54:22.558088    6964 out.go:304] Setting ErrFile to fd 816...
	I0604 21:54:22.558088    6964 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0604 21:54:22.582406    6964 out.go:298] Setting JSON to false
	I0604 21:54:22.586922    6964 start.go:129] hostinfo: {"hostname":"minikube6","uptime":85312,"bootTime":1717452750,"procs":183,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4474 Build 19045.4474","kernelVersion":"10.0.19045.4474 Build 19045.4474","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0604 21:54:22.586922    6964 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0604 21:54:22.591006    6964 out.go:177] * [functional-235400] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4474 Build 19045.4474
	I0604 21:54:22.594801    6964 notify.go:220] Checking for updates...
	I0604 21:54:22.598531    6964 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0604 21:54:22.601600    6964 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0604 21:54:22.604483    6964 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0604 21:54:22.607472    6964 out.go:177]   - MINIKUBE_LOCATION=19024
	I0604 21:54:22.609821    6964 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0604 21:54:22.614387    6964 config.go:182] Loaded profile config "functional-235400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0604 21:54:22.614740    6964 driver.go:392] Setting default libvirt URI to qemu:///system
	I0604 21:54:28.501045    6964 out.go:177] * Using the hyperv driver based on existing profile
	I0604 21:54:28.504012    6964 start.go:297] selected driver: hyperv
	I0604 21:54:28.504012    6964 start.go:901] validating driver "hyperv" against &{Name:functional-235400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19024/minikube-v1.33.1-1717518792-19024-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717518322-19024@sha256:d2210ba725128d67c6173c8b8d82d6c8736e8dad7a6c389a278f795205c6764f Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:functional-235400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.136.157 Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0604 21:54:28.505326    6964 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0604 21:54:28.562463    6964 cni.go:84] Creating CNI manager for ""
	I0604 21:54:28.562463    6964 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0604 21:54:28.562463    6964 start.go:340] cluster config:
	{Name:functional-235400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19024/minikube-v1.33.1-1717518792-19024-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717518322-19024@sha256:d2210ba725128d67c6173c8b8d82d6c8736e8dad7a6c389a278f795205c6764f Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:functional-235400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.136.157 Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0604 21:54:28.563222    6964 iso.go:125] acquiring lock: {Name:mkd51e140550ee3ad29317eefa47594b071594dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0604 21:54:28.567972    6964 out.go:177] * Starting "functional-235400" primary control-plane node in "functional-235400" cluster
	I0604 21:54:28.571258    6964 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0604 21:54:28.571258    6964 preload.go:147] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
	I0604 21:54:28.571258    6964 cache.go:56] Caching tarball of preloaded images
	I0604 21:54:28.571920    6964 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0604 21:54:28.572209    6964 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0604 21:54:28.572511    6964 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-235400\config.json ...
	I0604 21:54:28.575210    6964 start.go:360] acquireMachinesLock for functional-235400: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0604 21:54:28.575210    6964 start.go:364] duration metric: took 0s to acquireMachinesLock for "functional-235400"
	I0604 21:54:28.575948    6964 start.go:96] Skipping create...Using existing machine configuration
	I0604 21:54:28.575948    6964 fix.go:54] fixHost starting: 
	I0604 21:54:28.576135    6964 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-235400 ).state
	I0604 21:54:31.673043    6964 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 21:54:31.674227    6964 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:54:31.674227    6964 fix.go:112] recreateIfNeeded on functional-235400: state=Running err=<nil>
	W0604 21:54:31.674358    6964 fix.go:138] unexpected machine state, will restart: <nil>
	I0604 21:54:31.680471    6964 out.go:177] * Updating the running hyperv "functional-235400" VM ...
	I0604 21:54:31.682913    6964 machine.go:94] provisionDockerMachine start ...
	I0604 21:54:31.682913    6964 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-235400 ).state
	I0604 21:54:34.092082    6964 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 21:54:34.092940    6964 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:54:34.093084    6964 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-235400 ).networkadapters[0]).ipaddresses[0]
	I0604 21:54:36.938290    6964 main.go:141] libmachine: [stdout =====>] : 172.20.136.157
	
	I0604 21:54:36.938290    6964 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:54:36.944809    6964 main.go:141] libmachine: Using SSH client type: native
	I0604 21:54:36.945443    6964 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf6a540] 0xf6d120 <nil>  [] 0s} 172.20.136.157 22 <nil> <nil>}
	I0604 21:54:36.945443    6964 main.go:141] libmachine: About to run SSH command:
	hostname
	I0604 21:54:37.085258    6964 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-235400
	
	I0604 21:54:37.085258    6964 buildroot.go:166] provisioning hostname "functional-235400"
	I0604 21:54:37.085258    6964 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-235400 ).state
	I0604 21:54:39.481494    6964 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 21:54:39.481494    6964 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:54:39.481494    6964 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-235400 ).networkadapters[0]).ipaddresses[0]
	I0604 21:54:42.303409    6964 main.go:141] libmachine: [stdout =====>] : 172.20.136.157
	
	I0604 21:54:42.303482    6964 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:54:42.306971    6964 main.go:141] libmachine: Using SSH client type: native
	I0604 21:54:42.309973    6964 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf6a540] 0xf6d120 <nil>  [] 0s} 172.20.136.157 22 <nil> <nil>}
	I0604 21:54:42.310058    6964 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-235400 && echo "functional-235400" | sudo tee /etc/hostname
	I0604 21:54:42.482544    6964 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-235400
	
	I0604 21:54:42.482544    6964 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-235400 ).state
	I0604 21:54:44.839690    6964 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 21:54:44.839690    6964 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:54:44.839690    6964 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-235400 ).networkadapters[0]).ipaddresses[0]
	I0604 21:54:47.640457    6964 main.go:141] libmachine: [stdout =====>] : 172.20.136.157
	
	I0604 21:54:47.640457    6964 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:54:47.647215    6964 main.go:141] libmachine: Using SSH client type: native
	I0604 21:54:47.647735    6964 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf6a540] 0xf6d120 <nil>  [] 0s} 172.20.136.157 22 <nil> <nil>}
	I0604 21:54:47.647929    6964 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-235400' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-235400/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-235400' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0604 21:54:47.795476    6964 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0604 21:54:47.795529    6964 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0604 21:54:47.795529    6964 buildroot.go:174] setting up certificates
	I0604 21:54:47.795529    6964 provision.go:84] configureAuth start
	I0604 21:54:47.795529    6964 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-235400 ).state
	I0604 21:54:50.238867    6964 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 21:54:50.239267    6964 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:54:50.239372    6964 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-235400 ).networkadapters[0]).ipaddresses[0]
	I0604 21:54:53.079823    6964 main.go:141] libmachine: [stdout =====>] : 172.20.136.157
	
	I0604 21:54:53.079823    6964 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:54:53.079823    6964 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-235400 ).state
	I0604 21:54:55.464941    6964 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 21:54:55.464941    6964 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:54:55.464941    6964 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-235400 ).networkadapters[0]).ipaddresses[0]
	I0604 21:54:58.278798    6964 main.go:141] libmachine: [stdout =====>] : 172.20.136.157
	
	I0604 21:54:58.278888    6964 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:54:58.278888    6964 provision.go:143] copyHostCerts
	I0604 21:54:58.279126    6964 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0604 21:54:58.279545    6964 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0604 21:54:58.279545    6964 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0604 21:54:58.279833    6964 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0604 21:54:58.281071    6964 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0604 21:54:58.281573    6964 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0604 21:54:58.281573    6964 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0604 21:54:58.281816    6964 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0604 21:54:58.282906    6964 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0604 21:54:58.283294    6964 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0604 21:54:58.283294    6964 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0604 21:54:58.283694    6964 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0604 21:54:58.284312    6964 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-235400 san=[127.0.0.1 172.20.136.157 functional-235400 localhost minikube]
	I0604 21:54:58.395736    6964 provision.go:177] copyRemoteCerts
	I0604 21:54:58.407698    6964 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0604 21:54:58.407698    6964 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-235400 ).state
	I0604 21:55:00.775992    6964 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 21:55:00.775992    6964 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:55:00.776989    6964 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-235400 ).networkadapters[0]).ipaddresses[0]
	I0604 21:55:03.628633    6964 main.go:141] libmachine: [stdout =====>] : 172.20.136.157
	
	I0604 21:55:03.628819    6964 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:55:03.628819    6964 sshutil.go:53] new ssh client: &{IP:172.20.136.157 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-235400\id_rsa Username:docker}
	I0604 21:55:03.737202    6964 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.3294614s)
	I0604 21:55:03.737202    6964 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0604 21:55:03.737478    6964 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0604 21:55:03.809753    6964 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0604 21:55:03.810313    6964 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I0604 21:55:03.874029    6964 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0604 21:55:03.874576    6964 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0604 21:55:03.930314    6964 provision.go:87] duration metric: took 16.1346539s to configureAuth
	I0604 21:55:03.930314    6964 buildroot.go:189] setting minikube options for container-runtime
	I0604 21:55:03.931078    6964 config.go:182] Loaded profile config "functional-235400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0604 21:55:03.931606    6964 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-235400 ).state
	I0604 21:55:06.319029    6964 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 21:55:06.319029    6964 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:55:06.319029    6964 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-235400 ).networkadapters[0]).ipaddresses[0]
	I0604 21:55:09.158050    6964 main.go:141] libmachine: [stdout =====>] : 172.20.136.157
	
	I0604 21:55:09.158646    6964 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:55:09.164734    6964 main.go:141] libmachine: Using SSH client type: native
	I0604 21:55:09.165136    6964 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf6a540] 0xf6d120 <nil>  [] 0s} 172.20.136.157 22 <nil> <nil>}
	I0604 21:55:09.165136    6964 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0604 21:55:09.316426    6964 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0604 21:55:09.316426    6964 buildroot.go:70] root file system type: tmpfs
	I0604 21:55:09.316426    6964 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0604 21:55:09.316426    6964 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-235400 ).state
	I0604 21:55:11.692066    6964 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 21:55:11.692066    6964 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:55:11.692204    6964 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-235400 ).networkadapters[0]).ipaddresses[0]
	I0604 21:55:14.526258    6964 main.go:141] libmachine: [stdout =====>] : 172.20.136.157
	
	I0604 21:55:14.526258    6964 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:55:14.533140    6964 main.go:141] libmachine: Using SSH client type: native
	I0604 21:55:14.533897    6964 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf6a540] 0xf6d120 <nil>  [] 0s} 172.20.136.157 22 <nil> <nil>}
	I0604 21:55:14.534601    6964 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0604 21:55:14.720481    6964 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0604 21:55:14.720638    6964 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-235400 ).state
	I0604 21:55:17.105359    6964 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 21:55:17.105359    6964 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:55:17.105939    6964 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-235400 ).networkadapters[0]).ipaddresses[0]
	I0604 21:55:19.975122    6964 main.go:141] libmachine: [stdout =====>] : 172.20.136.157
	
	I0604 21:55:19.975917    6964 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:55:19.981592    6964 main.go:141] libmachine: Using SSH client type: native
	I0604 21:55:19.982258    6964 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf6a540] 0xf6d120 <nil>  [] 0s} 172.20.136.157 22 <nil> <nil>}
	I0604 21:55:19.982258    6964 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0604 21:55:20.140474    6964 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0604 21:55:20.140474    6964 machine.go:97] duration metric: took 48.4571684s to provisionDockerMachine
	I0604 21:55:20.140474    6964 start.go:293] postStartSetup for "functional-235400" (driver="hyperv")
	I0604 21:55:20.140474    6964 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0604 21:55:20.154374    6964 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0604 21:55:20.154374    6964 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-235400 ).state
	I0604 21:55:22.534716    6964 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 21:55:22.535039    6964 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:55:22.535039    6964 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-235400 ).networkadapters[0]).ipaddresses[0]
	I0604 21:55:25.354013    6964 main.go:141] libmachine: [stdout =====>] : 172.20.136.157
	
	I0604 21:55:25.354099    6964 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:55:25.354500    6964 sshutil.go:53] new ssh client: &{IP:172.20.136.157 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-235400\id_rsa Username:docker}
	I0604 21:55:25.477439    6964 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.3230219s)
	I0604 21:55:25.492532    6964 ssh_runner.go:195] Run: cat /etc/os-release
	I0604 21:55:25.499675    6964 command_runner.go:130] > NAME=Buildroot
	I0604 21:55:25.499762    6964 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0604 21:55:25.499762    6964 command_runner.go:130] > ID=buildroot
	I0604 21:55:25.499762    6964 command_runner.go:130] > VERSION_ID=2023.02.9
	I0604 21:55:25.499852    6964 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0604 21:55:25.499892    6964 info.go:137] Remote host: Buildroot 2023.02.9
	I0604 21:55:25.499892    6964 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0604 21:55:25.499892    6964 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0604 21:55:25.501252    6964 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\140642.pem -> 140642.pem in /etc/ssl/certs
	I0604 21:55:25.501252    6964 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\140642.pem -> /etc/ssl/certs/140642.pem
	I0604 21:55:25.502422    6964 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\14064\hosts -> hosts in /etc/test/nested/copy/14064
	I0604 21:55:25.502422    6964 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\14064\hosts -> /etc/test/nested/copy/14064/hosts
	I0604 21:55:25.514494    6964 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/14064
	I0604 21:55:25.541837    6964 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\140642.pem --> /etc/ssl/certs/140642.pem (1708 bytes)
	I0604 21:55:25.594971    6964 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\14064\hosts --> /etc/test/nested/copy/14064/hosts (40 bytes)
	I0604 21:55:25.646830    6964 start.go:296] duration metric: took 5.5063116s for postStartSetup
	I0604 21:55:25.647285    6964 fix.go:56] duration metric: took 57.0706293s for fixHost
	I0604 21:55:25.647691    6964 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-235400 ).state
	I0604 21:55:28.029247    6964 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 21:55:28.029725    6964 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:55:28.029725    6964 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-235400 ).networkadapters[0]).ipaddresses[0]
	I0604 21:55:30.870507    6964 main.go:141] libmachine: [stdout =====>] : 172.20.136.157
	
	I0604 21:55:30.871214    6964 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:55:30.876897    6964 main.go:141] libmachine: Using SSH client type: native
	I0604 21:55:30.877676    6964 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf6a540] 0xf6d120 <nil>  [] 0s} 172.20.136.157 22 <nil> <nil>}
	I0604 21:55:30.877676    6964 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0604 21:55:31.019212    6964 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717538131.013157327
	
	I0604 21:55:31.019212    6964 fix.go:216] guest clock: 1717538131.013157327
	I0604 21:55:31.019212    6964 fix.go:229] Guest: 2024-06-04 21:55:31.013157327 +0000 UTC Remote: 2024-06-04 21:55:25.6474021 +0000 UTC m=+63.272942901 (delta=5.365755227s)
	I0604 21:55:31.019212    6964 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-235400 ).state
	I0604 21:55:33.428504    6964 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 21:55:33.428504    6964 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:55:33.428923    6964 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-235400 ).networkadapters[0]).ipaddresses[0]
	I0604 21:55:36.305897    6964 main.go:141] libmachine: [stdout =====>] : 172.20.136.157
	
	I0604 21:55:36.305897    6964 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:55:36.311858    6964 main.go:141] libmachine: Using SSH client type: native
	I0604 21:55:36.312710    6964 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf6a540] 0xf6d120 <nil>  [] 0s} 172.20.136.157 22 <nil> <nil>}
	I0604 21:55:36.312710    6964 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1717538131
	I0604 21:55:36.496682    6964 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Jun  4 21:55:31 UTC 2024
	
	I0604 21:55:36.496682    6964 fix.go:236] clock set: Tue Jun  4 21:55:31 UTC 2024
	 (err=<nil>)
	I0604 21:55:36.496682    6964 start.go:83] releasing machines lock for "functional-235400", held for 1m7.920922s
	I0604 21:55:36.497251    6964 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-235400 ).state
	I0604 21:55:38.860597    6964 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 21:55:38.860597    6964 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:55:38.860597    6964 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-235400 ).networkadapters[0]).ipaddresses[0]
	I0604 21:55:41.734881    6964 main.go:141] libmachine: [stdout =====>] : 172.20.136.157
	
	I0604 21:55:41.734881    6964 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:55:41.740313    6964 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0604 21:55:41.740440    6964 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-235400 ).state
	I0604 21:55:41.750947    6964 ssh_runner.go:195] Run: cat /version.json
	I0604 21:55:41.750947    6964 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-235400 ).state
	I0604 21:55:44.127858    6964 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 21:55:44.127993    6964 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:55:44.127993    6964 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-235400 ).networkadapters[0]).ipaddresses[0]
	I0604 21:55:44.156387    6964 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 21:55:44.156655    6964 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:55:44.156832    6964 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-235400 ).networkadapters[0]).ipaddresses[0]
	I0604 21:55:47.122003    6964 main.go:141] libmachine: [stdout =====>] : 172.20.136.157
	
	I0604 21:55:47.122003    6964 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:55:47.123166    6964 sshutil.go:53] new ssh client: &{IP:172.20.136.157 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-235400\id_rsa Username:docker}
	I0604 21:55:47.149277    6964 main.go:141] libmachine: [stdout =====>] : 172.20.136.157
	
	I0604 21:55:47.149277    6964 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:55:47.150630    6964 sshutil.go:53] new ssh client: &{IP:172.20.136.157 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-235400\id_rsa Username:docker}
	I0604 21:55:47.287064    6964 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0604 21:55:47.287183    6964 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.5467058s)
	I0604 21:55:47.287183    6964 command_runner.go:130] > {"iso_version": "v1.33.1-1717518792-19024", "kicbase_version": "v0.0.44-1717064182-18993", "minikube_version": "v1.33.1", "commit": "8ad41152cc14078867a3ba7f5e3c263f5bd90a46"}
	I0604 21:55:47.287273    6964 ssh_runner.go:235] Completed: cat /version.json: (5.5362804s)
	I0604 21:55:47.300640    6964 ssh_runner.go:195] Run: systemctl --version
	I0604 21:55:47.311226    6964 command_runner.go:130] > systemd 252 (252)
	I0604 21:55:47.311226    6964 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0604 21:55:47.326040    6964 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0604 21:55:47.334974    6964 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0604 21:55:47.335334    6964 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0604 21:55:47.350947    6964 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0604 21:55:47.383803    6964 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0604 21:55:47.383855    6964 start.go:494] detecting cgroup driver to use...
	I0604 21:55:47.384229    6964 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0604 21:55:47.424429    6964 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0604 21:55:47.439920    6964 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0604 21:55:47.477852    6964 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0604 21:55:47.508211    6964 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0604 21:55:47.526186    6964 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0604 21:55:47.571496    6964 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0604 21:55:47.609684    6964 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0604 21:55:47.645852    6964 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0604 21:55:47.681902    6964 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0604 21:55:47.717930    6964 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0604 21:55:47.755661    6964 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0604 21:55:47.807216    6964 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0604 21:55:47.853267    6964 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0604 21:55:47.877767    6964 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0604 21:55:47.891338    6964 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0604 21:55:47.932442    6964 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0604 21:55:48.232545    6964 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0604 21:55:48.273492    6964 start.go:494] detecting cgroup driver to use...
	I0604 21:55:48.287671    6964 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0604 21:55:48.314377    6964 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0604 21:55:48.314586    6964 command_runner.go:130] > [Unit]
	I0604 21:55:48.314586    6964 command_runner.go:130] > Description=Docker Application Container Engine
	I0604 21:55:48.314643    6964 command_runner.go:130] > Documentation=https://docs.docker.com
	I0604 21:55:48.314643    6964 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0604 21:55:48.314643    6964 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0604 21:55:48.314643    6964 command_runner.go:130] > StartLimitBurst=3
	I0604 21:55:48.314643    6964 command_runner.go:130] > StartLimitIntervalSec=60
	I0604 21:55:48.314721    6964 command_runner.go:130] > [Service]
	I0604 21:55:48.314721    6964 command_runner.go:130] > Type=notify
	I0604 21:55:48.314721    6964 command_runner.go:130] > Restart=on-failure
	I0604 21:55:48.314721    6964 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0604 21:55:48.314721    6964 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0604 21:55:48.314791    6964 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0604 21:55:48.314791    6964 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0604 21:55:48.314791    6964 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0604 21:55:48.314791    6964 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0604 21:55:48.314852    6964 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0604 21:55:48.314852    6964 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0604 21:55:48.314914    6964 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0604 21:55:48.314961    6964 command_runner.go:130] > ExecStart=
	I0604 21:55:48.314961    6964 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0604 21:55:48.314961    6964 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0604 21:55:48.315019    6964 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0604 21:55:48.315019    6964 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0604 21:55:48.315019    6964 command_runner.go:130] > LimitNOFILE=infinity
	I0604 21:55:48.315081    6964 command_runner.go:130] > LimitNPROC=infinity
	I0604 21:55:48.315081    6964 command_runner.go:130] > LimitCORE=infinity
	I0604 21:55:48.315081    6964 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0604 21:55:48.315081    6964 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0604 21:55:48.315081    6964 command_runner.go:130] > TasksMax=infinity
	I0604 21:55:48.315081    6964 command_runner.go:130] > TimeoutStartSec=0
	I0604 21:55:48.315081    6964 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0604 21:55:48.315081    6964 command_runner.go:130] > Delegate=yes
	I0604 21:55:48.315081    6964 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0604 21:55:48.315081    6964 command_runner.go:130] > KillMode=process
	I0604 21:55:48.315081    6964 command_runner.go:130] > [Install]
	I0604 21:55:48.315081    6964 command_runner.go:130] > WantedBy=multi-user.target
	I0604 21:55:48.329065    6964 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0604 21:55:48.378510    6964 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0604 21:55:48.429160    6964 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0604 21:55:48.478111    6964 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0604 21:55:48.505398    6964 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0604 21:55:48.550226    6964 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
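	(For reference, /etc/crictl.yaml is a one-line file that points crictl at the chosen CRI socket; after this second write it reads:

	    runtime-endpoint: unix:///var/run/cri-dockerd.sock

	The same endpoint can also be passed ad hoc, e.g. crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version.)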
	I0604 21:55:48.563581    6964 ssh_runner.go:195] Run: which cri-dockerd
	I0604 21:55:48.572354    6964 command_runner.go:130] > /usr/bin/cri-dockerd
	I0604 21:55:48.585478    6964 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0604 21:55:48.616162    6964 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0604 21:55:48.680343    6964 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0604 21:55:48.996496    6964 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0604 21:55:49.309524    6964 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0604 21:55:49.309524    6964 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
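	(The 130-byte daemon.json payload itself is not echoed in the log; a minimal sketch of a Docker daemon.json that selects the cgroupfs driver, illustrative only and not the actual file from this run, would be:

	    {
	      "exec-opts": ["native.cgroupdriver=cgroupfs"]
	    }
	)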
	I0604 21:55:49.362586    6964 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0604 21:55:49.697177    6964 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0604 21:56:02.677328    6964 ssh_runner.go:235] Completed: sudo systemctl restart docker: (12.97997s)
	I0604 21:56:02.691070    6964 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0604 21:56:02.738827    6964 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0604 21:56:02.795027    6964 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0604 21:56:02.838410    6964 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0604 21:56:03.082778    6964 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0604 21:56:03.304055    6964 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0604 21:56:03.539588    6964 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0604 21:56:03.586832    6964 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0604 21:56:03.635755    6964 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0604 21:56:03.880342    6964 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0604 21:56:04.031660    6964 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0604 21:56:04.044194    6964 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0604 21:56:04.054388    6964 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0604 21:56:04.054388    6964 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0604 21:56:04.054388    6964 command_runner.go:130] > Device: 0,22	Inode: 1511        Links: 1
	I0604 21:56:04.054388    6964 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0604 21:56:04.054388    6964 command_runner.go:130] > Access: 2024-06-04 21:56:03.917722090 +0000
	I0604 21:56:04.054388    6964 command_runner.go:130] > Modify: 2024-06-04 21:56:03.917722090 +0000
	I0604 21:56:04.054388    6964 command_runner.go:130] > Change: 2024-06-04 21:56:03.922722451 +0000
	I0604 21:56:04.054388    6964 command_runner.go:130] >  Birth: -
	I0604 21:56:04.055760    6964 start.go:562] Will wait 60s for crictl version
	I0604 21:56:04.067856    6964 ssh_runner.go:195] Run: which crictl
	I0604 21:56:04.074699    6964 command_runner.go:130] > /usr/bin/crictl
	I0604 21:56:04.088153    6964 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0604 21:56:04.146733    6964 command_runner.go:130] > Version:  0.1.0
	I0604 21:56:04.146812    6964 command_runner.go:130] > RuntimeName:  docker
	I0604 21:56:04.146812    6964 command_runner.go:130] > RuntimeVersion:  26.1.3
	I0604 21:56:04.146812    6964 command_runner.go:130] > RuntimeApiVersion:  v1
	I0604 21:56:04.146812    6964 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.1.3
	RuntimeApiVersion:  v1
	I0604 21:56:04.156957    6964 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0604 21:56:04.192533    6964 command_runner.go:130] > 26.1.3
	I0604 21:56:04.204573    6964 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0604 21:56:04.241447    6964 command_runner.go:130] > 26.1.3
	I0604 21:56:04.246394    6964 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.1.3 ...
	I0604 21:56:04.246517    6964 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0604 21:56:04.255559    6964 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0604 21:56:04.255559    6964 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0604 21:56:04.255559    6964 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0604 21:56:04.255559    6964 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:24:f8:85 Flags:up|broadcast|multicast|running}
	I0604 21:56:04.258665    6964 ip.go:210] interface addr: fe80::4093:d10:ab69:6c7d/64
	I0604 21:56:04.258665    6964 ip.go:210] interface addr: 172.20.128.1/20
	I0604 21:56:04.272059    6964 ssh_runner.go:195] Run: grep 172.20.128.1	host.minikube.internal$ /etc/hosts
	I0604 21:56:04.278775    6964 command_runner.go:130] > 172.20.128.1	host.minikube.internal
	I0604 21:56:04.279678    6964 kubeadm.go:877] updating cluster {Name:functional-235400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19024/minikube-v1.33.1-1717518792-19024-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717518322-19024@sha256:d2210ba725128d67c6173c8b8d82d6c8736e8dad7a6c389a278f795205c6764f Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:
v1.30.1 ClusterName:functional-235400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.136.157 Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L
MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0604 21:56:04.279678    6964 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0604 21:56:04.292618    6964 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0604 21:56:04.320532    6964 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.1
	I0604 21:56:04.320532    6964 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.1
	I0604 21:56:04.321464    6964 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.1
	I0604 21:56:04.321464    6964 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.1
	I0604 21:56:04.321464    6964 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0604 21:56:04.321464    6964 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0604 21:56:04.321516    6964 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0604 21:56:04.321581    6964 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0604 21:56:04.321622    6964 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0604 21:56:04.321714    6964 docker.go:615] Images already preloaded, skipping extraction
	I0604 21:56:04.332471    6964 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0604 21:56:04.361605    6964 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.1
	I0604 21:56:04.361711    6964 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.1
	I0604 21:56:04.361777    6964 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.1
	I0604 21:56:04.361849    6964 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.1
	I0604 21:56:04.361849    6964 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0604 21:56:04.361849    6964 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0604 21:56:04.361849    6964 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0604 21:56:04.361849    6964 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0604 21:56:04.361849    6964 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0604 21:56:04.361849    6964 cache_images.go:84] Images are preloaded, skipping loading
	I0604 21:56:04.361849    6964 kubeadm.go:928] updating node { 172.20.136.157 8441 v1.30.1 docker true true} ...
	I0604 21:56:04.362515    6964 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-235400 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.20.136.157
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:functional-235400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0604 21:56:04.375090    6964 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0604 21:56:04.417305    6964 command_runner.go:130] > cgroupfs
	I0604 21:56:04.419028    6964 cni.go:84] Creating CNI manager for ""
	I0604 21:56:04.419028    6964 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0604 21:56:04.419028    6964 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0604 21:56:04.419028    6964 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.20.136.157 APIServerPort:8441 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-235400 NodeName:functional-235400 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.20.136.157"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.20.136.157 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0604 21:56:04.419028    6964 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.20.136.157
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-235400"
	  kubeletExtraArgs:
	    node-ip: 172.20.136.157
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.20.136.157"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0604 21:56:04.432159    6964 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0604 21:56:04.455496    6964 command_runner.go:130] > kubeadm
	I0604 21:56:04.455559    6964 command_runner.go:130] > kubectl
	I0604 21:56:04.455559    6964 command_runner.go:130] > kubelet
	I0604 21:56:04.455609    6964 binaries.go:44] Found k8s binaries, skipping transfer
	I0604 21:56:04.467506    6964 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0604 21:56:04.486565    6964 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0604 21:56:04.522483    6964 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0604 21:56:04.558620    6964 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2165 bytes)
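	(kubeadm.yaml.new holds the config dumped above; once it is promoted to /var/tmp/minikube/kubeadm.yaml by the cp further down, the restart path replays individual kubeadm phases against it instead of running a full init, as the later Run lines show:

	    kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml
	    kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml
	    kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml
	    kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml
	    kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml
	)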
	I0604 21:56:04.605710    6964 ssh_runner.go:195] Run: grep 172.20.136.157	control-plane.minikube.internal$ /etc/hosts
	I0604 21:56:04.613856    6964 command_runner.go:130] > 172.20.136.157	control-plane.minikube.internal
	I0604 21:56:04.626391    6964 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0604 21:56:04.887756    6964 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0604 21:56:04.940710    6964 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-235400 for IP: 172.20.136.157
	I0604 21:56:04.940710    6964 certs.go:194] generating shared ca certs ...
	I0604 21:56:04.940872    6964 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0604 21:56:04.941703    6964 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0604 21:56:04.942089    6964 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0604 21:56:04.942089    6964 certs.go:256] generating profile certs ...
	I0604 21:56:04.943181    6964 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-235400\client.key
	I0604 21:56:04.943611    6964 certs.go:359] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-235400\apiserver.key.1c5338bb
	I0604 21:56:04.943611    6964 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-235400\proxy-client.key
	I0604 21:56:04.944168    6964 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0604 21:56:04.944366    6964 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0604 21:56:04.944626    6964 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0604 21:56:04.944867    6964 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0604 21:56:04.945208    6964 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-235400\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0604 21:56:04.945435    6964 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-235400\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0604 21:56:04.945664    6964 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-235400\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0604 21:56:04.945966    6964 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-235400\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0604 21:56:04.946303    6964 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14064.pem (1338 bytes)
	W0604 21:56:04.947292    6964 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14064_empty.pem, impossibly tiny 0 bytes
	I0604 21:56:04.947437    6964 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0604 21:56:04.947840    6964 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0604 21:56:04.948357    6964 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0604 21:56:04.948660    6964 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0604 21:56:04.949493    6964 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\140642.pem (1708 bytes)
	I0604 21:56:04.949933    6964 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\140642.pem -> /usr/share/ca-certificates/140642.pem
	I0604 21:56:04.949933    6964 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0604 21:56:04.949933    6964 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14064.pem -> /usr/share/ca-certificates/14064.pem
	I0604 21:56:04.951613    6964 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0604 21:56:05.024443    6964 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0604 21:56:05.086645    6964 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0604 21:56:05.138624    6964 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0604 21:56:05.202070    6964 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-235400\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0604 21:56:05.259332    6964 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-235400\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0604 21:56:05.321571    6964 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-235400\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0604 21:56:05.392620    6964 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-235400\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0604 21:56:05.458144    6964 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\140642.pem --> /usr/share/ca-certificates/140642.pem (1708 bytes)
	I0604 21:56:05.527791    6964 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0604 21:56:05.587999    6964 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14064.pem --> /usr/share/ca-certificates/14064.pem (1338 bytes)
	I0604 21:56:05.653369    6964 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0604 21:56:05.714620    6964 ssh_runner.go:195] Run: openssl version
	I0604 21:56:05.733347    6964 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0604 21:56:05.747586    6964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/140642.pem && ln -fs /usr/share/ca-certificates/140642.pem /etc/ssl/certs/140642.pem"
	I0604 21:56:05.791869    6964 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/140642.pem
	I0604 21:56:05.798494    6964 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jun  4 21:50 /usr/share/ca-certificates/140642.pem
	I0604 21:56:05.798494    6964 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun  4 21:50 /usr/share/ca-certificates/140642.pem
	I0604 21:56:05.812646    6964 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/140642.pem
	I0604 21:56:05.831593    6964 command_runner.go:130] > 3ec20f2e
	I0604 21:56:05.844599    6964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/140642.pem /etc/ssl/certs/3ec20f2e.0"
	I0604 21:56:05.921051    6964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0604 21:56:05.971728    6964 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0604 21:56:05.985855    6964 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jun  4 21:33 /usr/share/ca-certificates/minikubeCA.pem
	I0604 21:56:05.985855    6964 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun  4 21:33 /usr/share/ca-certificates/minikubeCA.pem
	I0604 21:56:05.997805    6964 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0604 21:56:06.017633    6964 command_runner.go:130] > b5213941
	I0604 21:56:06.041745    6964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0604 21:56:06.089384    6964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14064.pem && ln -fs /usr/share/ca-certificates/14064.pem /etc/ssl/certs/14064.pem"
	I0604 21:56:06.154355    6964 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14064.pem
	I0604 21:56:06.166087    6964 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jun  4 21:50 /usr/share/ca-certificates/14064.pem
	I0604 21:56:06.166219    6964 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun  4 21:50 /usr/share/ca-certificates/14064.pem
	I0604 21:56:06.182644    6964 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14064.pem
	I0604 21:56:06.197334    6964 command_runner.go:130] > 51391683
	I0604 21:56:06.210335    6964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14064.pem /etc/ssl/certs/51391683.0"
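	(The certificate handling above follows OpenSSL's hashed-directory convention: each PEM under /usr/share/ca-certificates is linked into /etc/ssl/certs both by name and by subject hash. A sketch of the pattern, using the minikube CA seen here:

	    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # b5213941 in this run
	    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
	)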
	I0604 21:56:06.250144    6964 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0604 21:56:06.258411    6964 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0604 21:56:06.258411    6964 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0604 21:56:06.258411    6964 command_runner.go:130] > Device: 8,1	Inode: 9431378     Links: 1
	I0604 21:56:06.258411    6964 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0604 21:56:06.258411    6964 command_runner.go:130] > Access: 2024-06-04 21:53:12.527558783 +0000
	I0604 21:56:06.258411    6964 command_runner.go:130] > Modify: 2024-06-04 21:53:12.527558783 +0000
	I0604 21:56:06.258411    6964 command_runner.go:130] > Change: 2024-06-04 21:53:12.527558783 +0000
	I0604 21:56:06.258411    6964 command_runner.go:130] >  Birth: 2024-06-04 21:53:12.527558783 +0000
	I0604 21:56:06.274529    6964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0604 21:56:06.283770    6964 command_runner.go:130] > Certificate will not expire
	I0604 21:56:06.300058    6964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0604 21:56:06.310672    6964 command_runner.go:130] > Certificate will not expire
	I0604 21:56:06.322466    6964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0604 21:56:06.334387    6964 command_runner.go:130] > Certificate will not expire
	I0604 21:56:06.349562    6964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0604 21:56:06.364279    6964 command_runner.go:130] > Certificate will not expire
	I0604 21:56:06.376364    6964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0604 21:56:06.391052    6964 command_runner.go:130] > Certificate will not expire
	I0604 21:56:06.404824    6964 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0604 21:56:06.416867    6964 command_runner.go:130] > Certificate will not expire
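	(The -checkend 86400 probes ask whether each certificate expires within the next 24 hours, i.e. 86400 seconds; openssl exits 0 and prints "Certificate will not expire" when the cert is still valid for that window. For example:

	    openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
	      && echo "still valid for 24h" || echo "expiring soon, needs regeneration"
	)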
	I0604 21:56:06.416867    6964 kubeadm.go:391] StartCluster: {Name:functional-235400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19024/minikube-v1.33.1-1717518792-19024-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717518322-19024@sha256:d2210ba725128d67c6173c8b8d82d6c8736e8dad7a6c389a278f795205c6764f Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
30.1 ClusterName:functional-235400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.136.157 Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L Mou
ntGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0604 21:56:06.425862    6964 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0604 21:56:06.514129    6964 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0604 21:56:06.538907    6964 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0604 21:56:06.539362    6964 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0604 21:56:06.539423    6964 command_runner.go:130] > /var/lib/minikube/etcd:
	I0604 21:56:06.539423    6964 command_runner.go:130] > member
	W0604 21:56:06.539997    6964 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0604 21:56:06.540029    6964 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0604 21:56:06.540029    6964 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0604 21:56:06.555689    6964 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0604 21:56:06.581266    6964 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0604 21:56:06.582631    6964 kubeconfig.go:125] found "functional-235400" server: "https://172.20.136.157:8441"
	I0604 21:56:06.583079    6964 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0604 21:56:06.584936    6964 kapi.go:59] client config for functional-235400: &rest.Config{Host:"https://172.20.136.157:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-235400\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-235400\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil),
CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x240e1a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0604 21:56:06.587064    6964 cert_rotation.go:137] Starting client certificate rotation controller
	I0604 21:56:06.599436    6964 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0604 21:56:06.622223    6964 kubeadm.go:624] The running cluster does not require reconfiguration: 172.20.136.157
	I0604 21:56:06.622223    6964 kubeadm.go:1154] stopping kube-system containers ...
	I0604 21:56:06.631917    6964 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0604 21:56:06.697324    6964 command_runner.go:130] > 142d77d5d2ae
	I0604 21:56:06.697324    6964 command_runner.go:130] > 794f6dbbd6fa
	I0604 21:56:06.697324    6964 command_runner.go:130] > 52b42c3ade19
	I0604 21:56:06.697324    6964 command_runner.go:130] > 06a86e247a38
	I0604 21:56:06.697324    6964 command_runner.go:130] > a5f43eec3aaf
	I0604 21:56:06.697324    6964 command_runner.go:130] > de0dc0288636
	I0604 21:56:06.697324    6964 command_runner.go:130] > 98aaee861666
	I0604 21:56:06.697324    6964 command_runner.go:130] > 8d9ec6c383fd
	I0604 21:56:06.697324    6964 command_runner.go:130] > 9efb527eaa1b
	I0604 21:56:06.697324    6964 command_runner.go:130] > 7a46261538f5
	I0604 21:56:06.697324    6964 command_runner.go:130] > 8715ad83e441
	I0604 21:56:06.697324    6964 command_runner.go:130] > 614516231f09
	I0604 21:56:06.697324    6964 command_runner.go:130] > 6659a6826986
	I0604 21:56:06.697324    6964 command_runner.go:130] > dc9cd8c128be
	I0604 21:56:06.697324    6964 command_runner.go:130] > a813a93db7d0
	I0604 21:56:06.697324    6964 command_runner.go:130] > 4a79633a19fb
	I0604 21:56:06.697324    6964 command_runner.go:130] > 7c6a4ceda7e3
	I0604 21:56:06.697324    6964 command_runner.go:130] > cf7d0bec69f9
	I0604 21:56:06.697324    6964 command_runner.go:130] > 3498600109f8
	I0604 21:56:06.697324    6964 command_runner.go:130] > 7b9f0ecc69fb
	I0604 21:56:06.697324    6964 docker.go:483] Stopping containers: [142d77d5d2ae 794f6dbbd6fa 52b42c3ade19 06a86e247a38 a5f43eec3aaf de0dc0288636 98aaee861666 8d9ec6c383fd 9efb527eaa1b 7a46261538f5 8715ad83e441 614516231f09 6659a6826986 dc9cd8c128be a813a93db7d0 4a79633a19fb 7c6a4ceda7e3 cf7d0bec69f9 3498600109f8 7b9f0ecc69fb]
	I0604 21:56:06.708672    6964 ssh_runner.go:195] Run: docker stop 142d77d5d2ae 794f6dbbd6fa 52b42c3ade19 06a86e247a38 a5f43eec3aaf de0dc0288636 98aaee861666 8d9ec6c383fd 9efb527eaa1b 7a46261538f5 8715ad83e441 614516231f09 6659a6826986 dc9cd8c128be a813a93db7d0 4a79633a19fb 7c6a4ceda7e3 cf7d0bec69f9 3498600109f8 7b9f0ecc69fb
	I0604 21:56:07.574345    6964 command_runner.go:130] > 142d77d5d2ae
	I0604 21:56:07.575056    6964 command_runner.go:130] > 794f6dbbd6fa
	I0604 21:56:07.575056    6964 command_runner.go:130] > 52b42c3ade19
	I0604 21:56:07.575056    6964 command_runner.go:130] > 06a86e247a38
	I0604 21:56:07.575056    6964 command_runner.go:130] > a5f43eec3aaf
	I0604 21:56:07.575056    6964 command_runner.go:130] > de0dc0288636
	I0604 21:56:07.575056    6964 command_runner.go:130] > 98aaee861666
	I0604 21:56:07.575056    6964 command_runner.go:130] > 8d9ec6c383fd
	I0604 21:56:07.575056    6964 command_runner.go:130] > 9efb527eaa1b
	I0604 21:56:07.575056    6964 command_runner.go:130] > 7a46261538f5
	I0604 21:56:07.575056    6964 command_runner.go:130] > 8715ad83e441
	I0604 21:56:07.575056    6964 command_runner.go:130] > 614516231f09
	I0604 21:56:07.575056    6964 command_runner.go:130] > 6659a6826986
	I0604 21:56:07.575056    6964 command_runner.go:130] > dc9cd8c128be
	I0604 21:56:07.575056    6964 command_runner.go:130] > a813a93db7d0
	I0604 21:56:07.575056    6964 command_runner.go:130] > 4a79633a19fb
	I0604 21:56:07.575056    6964 command_runner.go:130] > 7c6a4ceda7e3
	I0604 21:56:07.575056    6964 command_runner.go:130] > cf7d0bec69f9
	I0604 21:56:07.575056    6964 command_runner.go:130] > 3498600109f8
	I0604 21:56:07.575056    6964 command_runner.go:130] > 7b9f0ecc69fb
	I0604 21:56:07.593074    6964 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0604 21:56:07.666220    6964 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0604 21:56:07.688294    6964 command_runner.go:130] > -rw------- 1 root root 5651 Jun  4 21:53 /etc/kubernetes/admin.conf
	I0604 21:56:07.688294    6964 command_runner.go:130] > -rw------- 1 root root 5658 Jun  4 21:53 /etc/kubernetes/controller-manager.conf
	I0604 21:56:07.688294    6964 command_runner.go:130] > -rw------- 1 root root 2007 Jun  4 21:53 /etc/kubernetes/kubelet.conf
	I0604 21:56:07.688294    6964 command_runner.go:130] > -rw------- 1 root root 5602 Jun  4 21:53 /etc/kubernetes/scheduler.conf
	I0604 21:56:07.688294    6964 kubeadm.go:156] found existing configuration files:
	-rw------- 1 root root 5651 Jun  4 21:53 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5658 Jun  4 21:53 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Jun  4 21:53 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5602 Jun  4 21:53 /etc/kubernetes/scheduler.conf
	
	I0604 21:56:07.703614    6964 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I0604 21:56:07.727589    6964 command_runner.go:130] >     server: https://control-plane.minikube.internal:8441
	I0604 21:56:07.743740    6964 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I0604 21:56:07.764053    6964 command_runner.go:130] >     server: https://control-plane.minikube.internal:8441
	I0604 21:56:07.779335    6964 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I0604 21:56:07.800745    6964 kubeadm.go:162] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0604 21:56:07.816972    6964 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0604 21:56:07.857576    6964 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I0604 21:56:07.920524    6964 kubeadm.go:162] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0604 21:56:07.934786    6964 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0604 21:56:07.977020    6964 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0604 21:56:07.999564    6964 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0604 21:56:08.164629    6964 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0604 21:56:08.165239    6964 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0604 21:56:08.165239    6964 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0604 21:56:08.165239    6964 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0604 21:56:08.165239    6964 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0604 21:56:08.165239    6964 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0604 21:56:08.165239    6964 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0604 21:56:08.165239    6964 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0604 21:56:08.165239    6964 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0604 21:56:08.165239    6964 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0604 21:56:08.165369    6964 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0604 21:56:08.165407    6964 command_runner.go:130] > [certs] Using the existing "sa" key
	I0604 21:56:08.165407    6964 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0604 21:56:09.404157    6964 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0604 21:56:09.404235    6964 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
	I0604 21:56:09.404235    6964 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/super-admin.conf"
	I0604 21:56:09.404235    6964 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/kubelet.conf"
	I0604 21:56:09.404235    6964 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0604 21:56:09.404235    6964 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0604 21:56:09.404235    6964 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.2388172s)
	I0604 21:56:09.404235    6964 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0604 21:56:09.517903    6964 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0604 21:56:09.519910    6964 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0604 21:56:09.519910    6964 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0604 21:56:09.796503    6964 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0604 21:56:09.906674    6964 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0604 21:56:09.906674    6964 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0604 21:56:09.906674    6964 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0604 21:56:09.906674    6964 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0604 21:56:09.906674    6964 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0604 21:56:10.079196    6964 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0604 21:56:10.081200    6964 api_server.go:52] waiting for apiserver process to appear ...
	I0604 21:56:10.095456    6964 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0604 21:56:10.602735    6964 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0604 21:56:11.094349    6964 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0604 21:56:11.601674    6964 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0604 21:56:11.632733    6964 command_runner.go:130] > 5558
	I0604 21:56:11.632733    6964 api_server.go:72] duration metric: took 1.5515202s to wait for apiserver process to appear ...
	I0604 21:56:11.632733    6964 api_server.go:88] waiting for apiserver healthz status ...
	I0604 21:56:11.632733    6964 api_server.go:253] Checking apiserver healthz at https://172.20.136.157:8441/healthz ...
	I0604 21:56:14.592397    6964 api_server.go:279] https://172.20.136.157:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0604 21:56:14.592397    6964 api_server.go:103] status: https://172.20.136.157:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0604 21:56:14.592397    6964 api_server.go:253] Checking apiserver healthz at https://172.20.136.157:8441/healthz ...
	I0604 21:56:14.644969    6964 api_server.go:279] https://172.20.136.157:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0604 21:56:14.645044    6964 api_server.go:103] status: https://172.20.136.157:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0604 21:56:14.645044    6964 api_server.go:253] Checking apiserver healthz at https://172.20.136.157:8441/healthz ...
	I0604 21:56:14.670830    6964 api_server.go:279] https://172.20.136.157:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0604 21:56:14.670830    6964 api_server.go:103] status: https://172.20.136.157:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0604 21:56:15.146531    6964 api_server.go:253] Checking apiserver healthz at https://172.20.136.157:8441/healthz ...
	I0604 21:56:15.163582    6964 api_server.go:279] https://172.20.136.157:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0604 21:56:15.163648    6964 api_server.go:103] status: https://172.20.136.157:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0604 21:56:15.640011    6964 api_server.go:253] Checking apiserver healthz at https://172.20.136.157:8441/healthz ...
	I0604 21:56:15.663232    6964 api_server.go:279] https://172.20.136.157:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0604 21:56:15.663300    6964 api_server.go:103] status: https://172.20.136.157:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0604 21:56:16.148564    6964 api_server.go:253] Checking apiserver healthz at https://172.20.136.157:8441/healthz ...
	I0604 21:56:16.183362    6964 api_server.go:279] https://172.20.136.157:8441/healthz returned 200:
	ok
	I0604 21:56:16.184075    6964 round_trippers.go:463] GET https://172.20.136.157:8441/version
	I0604 21:56:16.184142    6964 round_trippers.go:469] Request Headers:
	I0604 21:56:16.184142    6964 round_trippers.go:473]     Accept: application/json, */*
	I0604 21:56:16.184142    6964 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 21:56:16.207583    6964 round_trippers.go:574] Response Status: 200 OK in 23 milliseconds
	I0604 21:56:16.207583    6964 round_trippers.go:577] Response Headers:
	I0604 21:56:16.207583    6964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bc67a822-ecf2-4193-ad88-87cb0ce26a44
	I0604 21:56:16.207583    6964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 559e51f8-94bd-4b67-be51-fd991bc8927a
	I0604 21:56:16.207583    6964 round_trippers.go:580]     Content-Length: 263
	I0604 21:56:16.208164    6964 round_trippers.go:580]     Date: Tue, 04 Jun 2024 21:56:16 GMT
	I0604 21:56:16.208164    6964 round_trippers.go:580]     Audit-Id: ec95ff82-fa56-4c0f-bdb7-678d78ae4a1d
	I0604 21:56:16.208164    6964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 21:56:16.208164    6964 round_trippers.go:580]     Content-Type: application/json
	I0604 21:56:16.208236    6964 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.1",
	  "gitCommit": "6911225c3f747e1cd9d109c305436d08b668f086",
	  "gitTreeState": "clean",
	  "buildDate": "2024-05-14T10:42:02Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0604 21:56:16.208407    6964 api_server.go:141] control plane version: v1.30.1
	I0604 21:56:16.208479    6964 api_server.go:131] duration metric: took 4.5757083s to wait for apiserver health ...
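	(editor's note) The retry loop above keeps issuing GET /healthz until the apiserver's poststart hooks finish and the verbose health output flips from 500 to "200: ok". A minimal Go sketch of that kind of probe is below; it is illustrative only, not minikube's api_server.go, and it assumes the default system:public-info-viewer binding still exposes /healthz to anonymous callers and that skipping TLS verification against the VM's self-signed certificate is acceptable for a local check.

	// healthz_probe.go - illustrative sketch only; not minikube's implementation.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		// The apiserver endpoint seen in the log above; adjust for your cluster.
		url := "https://172.20.136.157:8441/healthz?verbose"

		// Skip certificate verification, since the control plane uses a self-signed CA.
		client := &http.Client{
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   5 * time.Second,
		}

		// Poll until the endpoint returns 200, mirroring the 500 -> 200 transition above.
		for {
			resp, err := client.Get(url)
			if err != nil {
				fmt.Println("healthz unreachable:", err)
				time.Sleep(500 * time.Millisecond)
				continue
			}
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
			if resp.StatusCode == http.StatusOK {
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
	}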
	I0604 21:56:16.208479    6964 cni.go:84] Creating CNI manager for ""
	I0604 21:56:16.208536    6964 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0604 21:56:16.211330    6964 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0604 21:56:16.224844    6964 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0604 21:56:16.255579    6964 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0604 21:56:16.334849    6964 system_pods.go:43] waiting for kube-system pods to appear ...
	I0604 21:56:16.334849    6964 round_trippers.go:463] GET https://172.20.136.157:8441/api/v1/namespaces/kube-system/pods
	I0604 21:56:16.334849    6964 round_trippers.go:469] Request Headers:
	I0604 21:56:16.334849    6964 round_trippers.go:473]     Accept: application/json, */*
	I0604 21:56:16.334849    6964 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 21:56:16.355831    6964 round_trippers.go:574] Response Status: 200 OK in 20 milliseconds
	I0604 21:56:16.355831    6964 round_trippers.go:577] Response Headers:
	I0604 21:56:16.356191    6964 round_trippers.go:580]     Content-Type: application/json
	I0604 21:56:16.356191    6964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bc67a822-ecf2-4193-ad88-87cb0ce26a44
	I0604 21:56:16.356191    6964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 559e51f8-94bd-4b67-be51-fd991bc8927a
	I0604 21:56:16.356191    6964 round_trippers.go:580]     Date: Tue, 04 Jun 2024 21:56:16 GMT
	I0604 21:56:16.356191    6964 round_trippers.go:580]     Audit-Id: 39d89be3-e1bf-432c-831a-5c6c759a7a7a
	I0604 21:56:16.356191    6964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 21:56:16.357479    6964 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"554"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-gfcww","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"8ce65d0a-5c28-4a96-a273-1c7987dcffb1","resourceVersion":"553","creationTimestamp":"2024-06-04T21:53:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"60a50cda-051b-43ee-9570-5034455df473","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-04T21:53:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"60a50cda-051b-43ee-9570-5034455df473\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 51564 chars]
	I0604 21:56:16.361837    6964 system_pods.go:59] 7 kube-system pods found
	I0604 21:56:16.362857    6964 system_pods.go:61] "coredns-7db6d8ff4d-gfcww" [8ce65d0a-5c28-4a96-a273-1c7987dcffb1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0604 21:56:16.362857    6964 system_pods.go:61] "etcd-functional-235400" [a92a3741-ab9e-4216-96a0-e52e897928b6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0604 21:56:16.362857    6964 system_pods.go:61] "kube-apiserver-functional-235400" [97c5c050-f0df-4404-8f46-6471f9c83e91] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0604 21:56:16.362857    6964 system_pods.go:61] "kube-controller-manager-functional-235400" [e62d7208-9bc7-486a-b79e-c4f2ca54e84b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0604 21:56:16.362857    6964 system_pods.go:61] "kube-proxy-2xs47" [144b79dc-c192-4e05-a481-2047f1a943c9] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0604 21:56:16.362857    6964 system_pods.go:61] "kube-scheduler-functional-235400" [6d9b2d8e-c3f2-452b-9d86-c16e21c60e69] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0604 21:56:16.362857    6964 system_pods.go:61] "storage-provisioner" [787187d8-02e6-447b-a0a1-dc664d9226e5] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0604 21:56:16.362857    6964 system_pods.go:74] duration metric: took 28.0077ms to wait for pod list to return data ...
	I0604 21:56:16.362857    6964 node_conditions.go:102] verifying NodePressure condition ...
	I0604 21:56:16.362857    6964 round_trippers.go:463] GET https://172.20.136.157:8441/api/v1/nodes
	I0604 21:56:16.362857    6964 round_trippers.go:469] Request Headers:
	I0604 21:56:16.362857    6964 round_trippers.go:473]     Accept: application/json, */*
	I0604 21:56:16.362857    6964 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 21:56:16.367834    6964 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0604 21:56:16.368001    6964 round_trippers.go:577] Response Headers:
	I0604 21:56:16.368001    6964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 21:56:16.368001    6964 round_trippers.go:580]     Content-Type: application/json
	I0604 21:56:16.368001    6964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bc67a822-ecf2-4193-ad88-87cb0ce26a44
	I0604 21:56:16.368001    6964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 559e51f8-94bd-4b67-be51-fd991bc8927a
	I0604 21:56:16.368001    6964 round_trippers.go:580]     Date: Tue, 04 Jun 2024 21:56:16 GMT
	I0604 21:56:16.368001    6964 round_trippers.go:580]     Audit-Id: bef3ae2d-4f7f-433c-b0ea-f163b5cc8c77
	I0604 21:56:16.368784    6964 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"554"},"items":[{"metadata":{"name":"functional-235400","uid":"a943ebb7-2ab5-4491-ae79-8c602d01cd21","resourceVersion":"546","creationTimestamp":"2024-06-04T21:53:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-235400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"functional-235400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_04T21_53_26_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedF
ields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","ti [truncated 4841 chars]
	I0604 21:56:16.370138    6964 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0604 21:56:16.370138    6964 node_conditions.go:123] node cpu capacity is 2
	I0604 21:56:16.370138    6964 node_conditions.go:105] duration metric: took 7.2808ms to run NodePressure ...
	I0604 21:56:16.370138    6964 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0604 21:56:17.124063    6964 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0604 21:56:17.124063    6964 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0604 21:56:17.124210    6964 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0604 21:56:17.124309    6964 round_trippers.go:463] GET https://172.20.136.157:8441/api/v1/namespaces/kube-system/pods?labelSelector=tier%!D(MISSING)control-plane
	I0604 21:56:17.124309    6964 round_trippers.go:469] Request Headers:
	I0604 21:56:17.124309    6964 round_trippers.go:473]     Accept: application/json, */*
	I0604 21:56:17.124309    6964 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 21:56:17.131679    6964 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0604 21:56:17.131679    6964 round_trippers.go:577] Response Headers:
	I0604 21:56:17.131679    6964 round_trippers.go:580]     Date: Tue, 04 Jun 2024 21:56:17 GMT
	I0604 21:56:17.131679    6964 round_trippers.go:580]     Audit-Id: 4eecaed6-2b2e-4fce-81ba-1675035572f0
	I0604 21:56:17.131679    6964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 21:56:17.131679    6964 round_trippers.go:580]     Content-Type: application/json
	I0604 21:56:17.131679    6964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bc67a822-ecf2-4193-ad88-87cb0ce26a44
	I0604 21:56:17.131679    6964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 559e51f8-94bd-4b67-be51-fd991bc8927a
	I0604 21:56:17.132672    6964 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"559"},"items":[{"metadata":{"name":"etcd-functional-235400","namespace":"kube-system","uid":"a92a3741-ab9e-4216-96a0-e52e897928b6","resourceVersion":"547","creationTimestamp":"2024-06-04T21:53:26Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.20.136.157:2379","kubernetes.io/config.hash":"ac62a29b768ac90672d20af88ce87818","kubernetes.io/config.mirror":"ac62a29b768ac90672d20af88ce87818","kubernetes.io/config.seen":"2024-06-04T21:53:26.215208271Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-235400","uid":"a943ebb7-2ab5-4491-ae79-8c602d01cd21","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-04T21:53:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotati
ons":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f [truncated 30984 chars]
	I0604 21:56:17.133740    6964 kubeadm.go:733] kubelet initialised
	I0604 21:56:17.133740    6964 kubeadm.go:734] duration metric: took 9.5302ms waiting for restarted kubelet to initialise ...
	I0604 21:56:17.133740    6964 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0604 21:56:17.133740    6964 round_trippers.go:463] GET https://172.20.136.157:8441/api/v1/namespaces/kube-system/pods
	I0604 21:56:17.134686    6964 round_trippers.go:469] Request Headers:
	I0604 21:56:17.134686    6964 round_trippers.go:473]     Accept: application/json, */*
	I0604 21:56:17.134686    6964 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 21:56:17.136674    6964 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0604 21:56:17.136674    6964 round_trippers.go:577] Response Headers:
	I0604 21:56:17.136674    6964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 21:56:17.136674    6964 round_trippers.go:580]     Content-Type: application/json
	I0604 21:56:17.136674    6964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bc67a822-ecf2-4193-ad88-87cb0ce26a44
	I0604 21:56:17.136674    6964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 559e51f8-94bd-4b67-be51-fd991bc8927a
	I0604 21:56:17.136674    6964 round_trippers.go:580]     Date: Tue, 04 Jun 2024 21:56:17 GMT
	I0604 21:56:17.136674    6964 round_trippers.go:580]     Audit-Id: 5cfcd84f-2774-41c7-bba3-ffac8edc48c8
	I0604 21:56:17.142077    6964 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"559"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-gfcww","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"8ce65d0a-5c28-4a96-a273-1c7987dcffb1","resourceVersion":"553","creationTimestamp":"2024-06-04T21:53:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"60a50cda-051b-43ee-9570-5034455df473","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-04T21:53:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"60a50cda-051b-43ee-9570-5034455df473\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 51164 chars]
	I0604 21:56:17.144405    6964 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-gfcww" in "kube-system" namespace to be "Ready" ...
	I0604 21:56:17.144405    6964 round_trippers.go:463] GET https://172.20.136.157:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-gfcww
	I0604 21:56:17.144405    6964 round_trippers.go:469] Request Headers:
	I0604 21:56:17.144405    6964 round_trippers.go:473]     Accept: application/json, */*
	I0604 21:56:17.144405    6964 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 21:56:17.147965    6964 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0604 21:56:17.147965    6964 round_trippers.go:577] Response Headers:
	I0604 21:56:17.147965    6964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 559e51f8-94bd-4b67-be51-fd991bc8927a
	I0604 21:56:17.147965    6964 round_trippers.go:580]     Date: Tue, 04 Jun 2024 21:56:17 GMT
	I0604 21:56:17.147965    6964 round_trippers.go:580]     Audit-Id: fdc55328-8179-40d2-b2c5-48665c67f947
	I0604 21:56:17.147965    6964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 21:56:17.147965    6964 round_trippers.go:580]     Content-Type: application/json
	I0604 21:56:17.147965    6964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bc67a822-ecf2-4193-ad88-87cb0ce26a44
	I0604 21:56:17.147965    6964 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-gfcww","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"8ce65d0a-5c28-4a96-a273-1c7987dcffb1","resourceVersion":"553","creationTimestamp":"2024-06-04T21:53:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"60a50cda-051b-43ee-9570-5034455df473","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-04T21:53:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"60a50cda-051b-43ee-9570-5034455df473\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6505 chars]
	I0604 21:56:17.148974    6964 round_trippers.go:463] GET https://172.20.136.157:8441/api/v1/nodes/functional-235400
	I0604 21:56:17.148974    6964 round_trippers.go:469] Request Headers:
	I0604 21:56:17.148974    6964 round_trippers.go:473]     Accept: application/json, */*
	I0604 21:56:17.148974    6964 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 21:56:17.151973    6964 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0604 21:56:17.152802    6964 round_trippers.go:577] Response Headers:
	I0604 21:56:17.152802    6964 round_trippers.go:580]     Audit-Id: 85cddeba-4019-40e0-b9c9-2540ef890f1d
	I0604 21:56:17.152802    6964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 21:56:17.152802    6964 round_trippers.go:580]     Content-Type: application/json
	I0604 21:56:17.152802    6964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bc67a822-ecf2-4193-ad88-87cb0ce26a44
	I0604 21:56:17.152802    6964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 559e51f8-94bd-4b67-be51-fd991bc8927a
	I0604 21:56:17.152802    6964 round_trippers.go:580]     Date: Tue, 04 Jun 2024 21:56:17 GMT
	I0604 21:56:17.153091    6964 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-235400","uid":"a943ebb7-2ab5-4491-ae79-8c602d01cd21","resourceVersion":"546","creationTimestamp":"2024-06-04T21:53:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-235400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"functional-235400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_04T21_53_26_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-04T21:53:22Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0604 21:56:17.647060    6964 round_trippers.go:463] GET https://172.20.136.157:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-gfcww
	I0604 21:56:17.647060    6964 round_trippers.go:469] Request Headers:
	I0604 21:56:17.647060    6964 round_trippers.go:473]     Accept: application/json, */*
	I0604 21:56:17.647060    6964 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 21:56:17.651619    6964 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0604 21:56:17.651755    6964 round_trippers.go:577] Response Headers:
	I0604 21:56:17.651755    6964 round_trippers.go:580]     Content-Type: application/json
	I0604 21:56:17.651755    6964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bc67a822-ecf2-4193-ad88-87cb0ce26a44
	I0604 21:56:17.651755    6964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 559e51f8-94bd-4b67-be51-fd991bc8927a
	I0604 21:56:17.651755    6964 round_trippers.go:580]     Date: Tue, 04 Jun 2024 21:56:17 GMT
	I0604 21:56:17.651755    6964 round_trippers.go:580]     Audit-Id: b80711f1-f266-4db3-87fa-3c6adab5c30f
	I0604 21:56:17.651755    6964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 21:56:17.651951    6964 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-gfcww","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"8ce65d0a-5c28-4a96-a273-1c7987dcffb1","resourceVersion":"553","creationTimestamp":"2024-06-04T21:53:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"60a50cda-051b-43ee-9570-5034455df473","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-04T21:53:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"60a50cda-051b-43ee-9570-5034455df473\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6505 chars]
	I0604 21:56:17.652654    6964 round_trippers.go:463] GET https://172.20.136.157:8441/api/v1/nodes/functional-235400
	I0604 21:56:17.652729    6964 round_trippers.go:469] Request Headers:
	I0604 21:56:17.652729    6964 round_trippers.go:473]     Accept: application/json, */*
	I0604 21:56:17.652729    6964 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 21:56:17.655521    6964 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0604 21:56:17.655521    6964 round_trippers.go:577] Response Headers:
	I0604 21:56:17.655521    6964 round_trippers.go:580]     Content-Type: application/json
	I0604 21:56:17.656520    6964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bc67a822-ecf2-4193-ad88-87cb0ce26a44
	I0604 21:56:17.656520    6964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 559e51f8-94bd-4b67-be51-fd991bc8927a
	I0604 21:56:17.656520    6964 round_trippers.go:580]     Date: Tue, 04 Jun 2024 21:56:17 GMT
	I0604 21:56:17.656575    6964 round_trippers.go:580]     Audit-Id: ebd46c66-9880-4d0f-bd49-12cc45c96104
	I0604 21:56:17.656575    6964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 21:56:17.656721    6964 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-235400","uid":"a943ebb7-2ab5-4491-ae79-8c602d01cd21","resourceVersion":"546","creationTimestamp":"2024-06-04T21:53:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-235400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"functional-235400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_04T21_53_26_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-04T21:53:22Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0604 21:56:18.149006    6964 round_trippers.go:463] GET https://172.20.136.157:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-gfcww
	I0604 21:56:18.149289    6964 round_trippers.go:469] Request Headers:
	I0604 21:56:18.149289    6964 round_trippers.go:473]     Accept: application/json, */*
	I0604 21:56:18.149289    6964 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 21:56:18.157122    6964 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0604 21:56:18.157122    6964 round_trippers.go:577] Response Headers:
	I0604 21:56:18.157122    6964 round_trippers.go:580]     Date: Tue, 04 Jun 2024 21:56:18 GMT
	I0604 21:56:18.157122    6964 round_trippers.go:580]     Audit-Id: 19b6e230-4fba-4f96-8c55-decee571aa4f
	I0604 21:56:18.157122    6964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 21:56:18.157122    6964 round_trippers.go:580]     Content-Type: application/json
	I0604 21:56:18.157122    6964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bc67a822-ecf2-4193-ad88-87cb0ce26a44
	I0604 21:56:18.157122    6964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 559e51f8-94bd-4b67-be51-fd991bc8927a
	I0604 21:56:18.157849    6964 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-gfcww","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"8ce65d0a-5c28-4a96-a273-1c7987dcffb1","resourceVersion":"560","creationTimestamp":"2024-06-04T21:53:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"60a50cda-051b-43ee-9570-5034455df473","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-04T21:53:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"60a50cda-051b-43ee-9570-5034455df473\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6681 chars]
	I0604 21:56:18.157849    6964 round_trippers.go:463] GET https://172.20.136.157:8441/api/v1/nodes/functional-235400
	I0604 21:56:18.157849    6964 round_trippers.go:469] Request Headers:
	I0604 21:56:18.157849    6964 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 21:56:18.157849    6964 round_trippers.go:473]     Accept: application/json, */*
	I0604 21:56:18.174076    6964 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0604 21:56:18.174119    6964 round_trippers.go:577] Response Headers:
	I0604 21:56:18.174119    6964 round_trippers.go:580]     Audit-Id: 16f5c8e0-26e1-4c43-b823-f88b5697c719
	I0604 21:56:18.174119    6964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 21:56:18.174119    6964 round_trippers.go:580]     Content-Type: application/json
	I0604 21:56:18.174119    6964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bc67a822-ecf2-4193-ad88-87cb0ce26a44
	I0604 21:56:18.174119    6964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 559e51f8-94bd-4b67-be51-fd991bc8927a
	I0604 21:56:18.174119    6964 round_trippers.go:580]     Date: Tue, 04 Jun 2024 21:56:18 GMT
	I0604 21:56:18.174395    6964 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-235400","uid":"a943ebb7-2ab5-4491-ae79-8c602d01cd21","resourceVersion":"546","creationTimestamp":"2024-06-04T21:53:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-235400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"functional-235400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_04T21_53_26_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-04T21:53:22Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0604 21:56:18.652278    6964 round_trippers.go:463] GET https://172.20.136.157:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-gfcww
	I0604 21:56:18.652383    6964 round_trippers.go:469] Request Headers:
	I0604 21:56:18.652383    6964 round_trippers.go:473]     Accept: application/json, */*
	I0604 21:56:18.652383    6964 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 21:56:18.656404    6964 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0604 21:56:18.656404    6964 round_trippers.go:577] Response Headers:
	I0604 21:56:18.656497    6964 round_trippers.go:580]     Content-Type: application/json
	I0604 21:56:18.656497    6964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bc67a822-ecf2-4193-ad88-87cb0ce26a44
	I0604 21:56:18.656497    6964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 559e51f8-94bd-4b67-be51-fd991bc8927a
	I0604 21:56:18.656497    6964 round_trippers.go:580]     Date: Tue, 04 Jun 2024 21:56:18 GMT
	I0604 21:56:18.656497    6964 round_trippers.go:580]     Audit-Id: e5dfb7b3-492d-46ac-8637-fcf502f53da1
	I0604 21:56:18.656497    6964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 21:56:18.656699    6964 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-gfcww","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"8ce65d0a-5c28-4a96-a273-1c7987dcffb1","resourceVersion":"560","creationTimestamp":"2024-06-04T21:53:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"60a50cda-051b-43ee-9570-5034455df473","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-04T21:53:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"60a50cda-051b-43ee-9570-5034455df473\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6681 chars]
	I0604 21:56:18.657510    6964 round_trippers.go:463] GET https://172.20.136.157:8441/api/v1/nodes/functional-235400
	I0604 21:56:18.657587    6964 round_trippers.go:469] Request Headers:
	I0604 21:56:18.657587    6964 round_trippers.go:473]     Accept: application/json, */*
	I0604 21:56:18.657587    6964 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 21:56:18.660781    6964 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0604 21:56:18.660781    6964 round_trippers.go:577] Response Headers:
	I0604 21:56:18.660781    6964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 21:56:18.660781    6964 round_trippers.go:580]     Content-Type: application/json
	I0604 21:56:18.660781    6964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bc67a822-ecf2-4193-ad88-87cb0ce26a44
	I0604 21:56:18.661032    6964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 559e51f8-94bd-4b67-be51-fd991bc8927a
	I0604 21:56:18.661032    6964 round_trippers.go:580]     Date: Tue, 04 Jun 2024 21:56:18 GMT
	I0604 21:56:18.661032    6964 round_trippers.go:580]     Audit-Id: 73707279-afff-444a-ad23-2785f747244f
	I0604 21:56:18.661301    6964 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-235400","uid":"a943ebb7-2ab5-4491-ae79-8c602d01cd21","resourceVersion":"546","creationTimestamp":"2024-06-04T21:53:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-235400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"functional-235400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_04T21_53_26_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-04T21:53:22Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0604 21:56:19.152197    6964 round_trippers.go:463] GET https://172.20.136.157:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-gfcww
	I0604 21:56:19.152197    6964 round_trippers.go:469] Request Headers:
	I0604 21:56:19.152197    6964 round_trippers.go:473]     Accept: application/json, */*
	I0604 21:56:19.152197    6964 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 21:56:19.156817    6964 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0604 21:56:19.157430    6964 round_trippers.go:577] Response Headers:
	I0604 21:56:19.157430    6964 round_trippers.go:580]     Date: Tue, 04 Jun 2024 21:56:19 GMT
	I0604 21:56:19.157430    6964 round_trippers.go:580]     Audit-Id: 1206b1e9-1fd5-42bf-aa28-f5f97031a0a8
	I0604 21:56:19.157430    6964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 21:56:19.157430    6964 round_trippers.go:580]     Content-Type: application/json
	I0604 21:56:19.157430    6964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bc67a822-ecf2-4193-ad88-87cb0ce26a44
	I0604 21:56:19.157430    6964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 559e51f8-94bd-4b67-be51-fd991bc8927a
	I0604 21:56:19.157518    6964 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-gfcww","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"8ce65d0a-5c28-4a96-a273-1c7987dcffb1","resourceVersion":"560","creationTimestamp":"2024-06-04T21:53:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"60a50cda-051b-43ee-9570-5034455df473","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-04T21:53:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"60a50cda-051b-43ee-9570-5034455df473\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6681 chars]
	I0604 21:56:19.158361    6964 round_trippers.go:463] GET https://172.20.136.157:8441/api/v1/nodes/functional-235400
	I0604 21:56:19.158441    6964 round_trippers.go:469] Request Headers:
	I0604 21:56:19.158441    6964 round_trippers.go:473]     Accept: application/json, */*
	I0604 21:56:19.158441    6964 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 21:56:19.160926    6964 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0604 21:56:19.160926    6964 round_trippers.go:577] Response Headers:
	I0604 21:56:19.161675    6964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 559e51f8-94bd-4b67-be51-fd991bc8927a
	I0604 21:56:19.161675    6964 round_trippers.go:580]     Date: Tue, 04 Jun 2024 21:56:19 GMT
	I0604 21:56:19.161675    6964 round_trippers.go:580]     Audit-Id: f2923c88-58ce-42f1-b193-09b84552d1f4
	I0604 21:56:19.161675    6964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 21:56:19.161675    6964 round_trippers.go:580]     Content-Type: application/json
	I0604 21:56:19.161675    6964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bc67a822-ecf2-4193-ad88-87cb0ce26a44
	I0604 21:56:19.162017    6964 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-235400","uid":"a943ebb7-2ab5-4491-ae79-8c602d01cd21","resourceVersion":"546","creationTimestamp":"2024-06-04T21:53:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-235400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"functional-235400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_04T21_53_26_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-04T21:53:22Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0604 21:56:19.162428    6964 pod_ready.go:102] pod "coredns-7db6d8ff4d-gfcww" in "kube-system" namespace has status "Ready":"False"
	I0604 21:56:19.656891    6964 round_trippers.go:463] GET https://172.20.136.157:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-gfcww
	I0604 21:56:19.656972    6964 round_trippers.go:469] Request Headers:
	I0604 21:56:19.656972    6964 round_trippers.go:473]     Accept: application/json, */*
	I0604 21:56:19.656972    6964 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 21:56:19.661703    6964 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0604 21:56:19.662024    6964 round_trippers.go:577] Response Headers:
	I0604 21:56:19.662304    6964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 21:56:19.662336    6964 round_trippers.go:580]     Content-Type: application/json
	I0604 21:56:19.662336    6964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bc67a822-ecf2-4193-ad88-87cb0ce26a44
	I0604 21:56:19.662336    6964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 559e51f8-94bd-4b67-be51-fd991bc8927a
	I0604 21:56:19.662336    6964 round_trippers.go:580]     Date: Tue, 04 Jun 2024 21:56:19 GMT
	I0604 21:56:19.662336    6964 round_trippers.go:580]     Audit-Id: 63e75053-be1f-4de0-9bf1-55254240cdcb
	I0604 21:56:19.662781    6964 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-gfcww","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"8ce65d0a-5c28-4a96-a273-1c7987dcffb1","resourceVersion":"560","creationTimestamp":"2024-06-04T21:53:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"60a50cda-051b-43ee-9570-5034455df473","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-04T21:53:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"60a50cda-051b-43ee-9570-5034455df473\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6681 chars]
	I0604 21:56:19.663081    6964 round_trippers.go:463] GET https://172.20.136.157:8441/api/v1/nodes/functional-235400
	I0604 21:56:19.663081    6964 round_trippers.go:469] Request Headers:
	I0604 21:56:19.663634    6964 round_trippers.go:473]     Accept: application/json, */*
	I0604 21:56:19.663634    6964 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 21:56:19.676668    6964 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0604 21:56:19.676668    6964 round_trippers.go:577] Response Headers:
	I0604 21:56:19.676668    6964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bc67a822-ecf2-4193-ad88-87cb0ce26a44
	I0604 21:56:19.676668    6964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 559e51f8-94bd-4b67-be51-fd991bc8927a
	I0604 21:56:19.676668    6964 round_trippers.go:580]     Date: Tue, 04 Jun 2024 21:56:19 GMT
	I0604 21:56:19.676668    6964 round_trippers.go:580]     Audit-Id: fb20d61e-8275-4ca4-b2fe-c2c31c72b9b7
	I0604 21:56:19.676668    6964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 21:56:19.676668    6964 round_trippers.go:580]     Content-Type: application/json
	I0604 21:56:19.677098    6964 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-235400","uid":"a943ebb7-2ab5-4491-ae79-8c602d01cd21","resourceVersion":"546","creationTimestamp":"2024-06-04T21:53:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-235400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"functional-235400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_04T21_53_26_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-04T21:53:22Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0604 21:56:20.159401    6964 round_trippers.go:463] GET https://172.20.136.157:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-gfcww
	I0604 21:56:20.159548    6964 round_trippers.go:469] Request Headers:
	I0604 21:56:20.159735    6964 round_trippers.go:473]     Accept: application/json, */*
	I0604 21:56:20.159735    6964 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 21:56:20.163320    6964 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0604 21:56:20.163320    6964 round_trippers.go:577] Response Headers:
	I0604 21:56:20.163320    6964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bc67a822-ecf2-4193-ad88-87cb0ce26a44
	I0604 21:56:20.163320    6964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 559e51f8-94bd-4b67-be51-fd991bc8927a
	I0604 21:56:20.163320    6964 round_trippers.go:580]     Date: Tue, 04 Jun 2024 21:56:20 GMT
	I0604 21:56:20.163320    6964 round_trippers.go:580]     Audit-Id: f6f4b19e-a17c-4aac-8981-a739fc7849a9
	I0604 21:56:20.163320    6964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 21:56:20.163320    6964 round_trippers.go:580]     Content-Type: application/json
	I0604 21:56:20.164266    6964 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-gfcww","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"8ce65d0a-5c28-4a96-a273-1c7987dcffb1","resourceVersion":"560","creationTimestamp":"2024-06-04T21:53:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"60a50cda-051b-43ee-9570-5034455df473","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-04T21:53:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"60a50cda-051b-43ee-9570-5034455df473\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6681 chars]
	I0604 21:56:20.165485    6964 round_trippers.go:463] GET https://172.20.136.157:8441/api/v1/nodes/functional-235400
	I0604 21:56:20.165485    6964 round_trippers.go:469] Request Headers:
	I0604 21:56:20.165580    6964 round_trippers.go:473]     Accept: application/json, */*
	I0604 21:56:20.165580    6964 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 21:56:20.169436    6964 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0604 21:56:20.170184    6964 round_trippers.go:577] Response Headers:
	I0604 21:56:20.170184    6964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 559e51f8-94bd-4b67-be51-fd991bc8927a
	I0604 21:56:20.170248    6964 round_trippers.go:580]     Date: Tue, 04 Jun 2024 21:56:20 GMT
	I0604 21:56:20.170248    6964 round_trippers.go:580]     Audit-Id: 03c18878-3687-4558-8bd8-9bc7059db8b3
	I0604 21:56:20.170248    6964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 21:56:20.170248    6964 round_trippers.go:580]     Content-Type: application/json
	I0604 21:56:20.170248    6964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bc67a822-ecf2-4193-ad88-87cb0ce26a44
	I0604 21:56:20.170248    6964 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-235400","uid":"a943ebb7-2ab5-4491-ae79-8c602d01cd21","resourceVersion":"546","creationTimestamp":"2024-06-04T21:53:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-235400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"functional-235400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_04T21_53_26_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-04T21:53:22Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0604 21:56:20.645574    6964 round_trippers.go:463] GET https://172.20.136.157:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-gfcww
	I0604 21:56:20.645574    6964 round_trippers.go:469] Request Headers:
	I0604 21:56:20.645766    6964 round_trippers.go:473]     Accept: application/json, */*
	I0604 21:56:20.645766    6964 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 21:56:20.650245    6964 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0604 21:56:20.650468    6964 round_trippers.go:577] Response Headers:
	I0604 21:56:20.650468    6964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bc67a822-ecf2-4193-ad88-87cb0ce26a44
	I0604 21:56:20.650468    6964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 559e51f8-94bd-4b67-be51-fd991bc8927a
	I0604 21:56:20.650468    6964 round_trippers.go:580]     Date: Tue, 04 Jun 2024 21:56:20 GMT
	I0604 21:56:20.650468    6964 round_trippers.go:580]     Audit-Id: fb7987ba-3572-46ee-82f6-405ea22f3584
	I0604 21:56:20.650468    6964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 21:56:20.650468    6964 round_trippers.go:580]     Content-Type: application/json
	I0604 21:56:20.650657    6964 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-gfcww","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"8ce65d0a-5c28-4a96-a273-1c7987dcffb1","resourceVersion":"611","creationTimestamp":"2024-06-04T21:53:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"60a50cda-051b-43ee-9570-5034455df473","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-04T21:53:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"60a50cda-051b-43ee-9570-5034455df473\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6452 chars]
	I0604 21:56:20.651259    6964 round_trippers.go:463] GET https://172.20.136.157:8441/api/v1/nodes/functional-235400
	I0604 21:56:20.651259    6964 round_trippers.go:469] Request Headers:
	I0604 21:56:20.651259    6964 round_trippers.go:473]     Accept: application/json, */*
	I0604 21:56:20.651259    6964 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 21:56:20.657429    6964 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0604 21:56:20.657429    6964 round_trippers.go:577] Response Headers:
	I0604 21:56:20.657429    6964 round_trippers.go:580]     Date: Tue, 04 Jun 2024 21:56:20 GMT
	I0604 21:56:20.657429    6964 round_trippers.go:580]     Audit-Id: ea6f444b-be92-46a8-b664-a85acaf6cb09
	I0604 21:56:20.657429    6964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 21:56:20.657429    6964 round_trippers.go:580]     Content-Type: application/json
	I0604 21:56:20.657429    6964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bc67a822-ecf2-4193-ad88-87cb0ce26a44
	I0604 21:56:20.657429    6964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 559e51f8-94bd-4b67-be51-fd991bc8927a
	I0604 21:56:20.657429    6964 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-235400","uid":"a943ebb7-2ab5-4491-ae79-8c602d01cd21","resourceVersion":"546","creationTimestamp":"2024-06-04T21:53:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-235400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"functional-235400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_04T21_53_26_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-04T21:53:22Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0604 21:56:20.658198    6964 pod_ready.go:92] pod "coredns-7db6d8ff4d-gfcww" in "kube-system" namespace has status "Ready":"True"
	I0604 21:56:20.658198    6964 pod_ready.go:81] duration metric: took 3.5137639s for pod "coredns-7db6d8ff4d-gfcww" in "kube-system" namespace to be "Ready" ...
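	(editor's note) Each of the polls above ends once the pod's Ready condition turns True. The sketch below shows that condition check using the upstream k8s.io/api types; it is an assumption about how such a check can be expressed, not minikube's pod_ready.go.

	// podready.go - illustrative Ready-condition check, not minikube's implementation.
	package main

	import (
		"fmt"

		corev1 "k8s.io/api/core/v1"
	)

	// isPodReady reports whether the pod's Ready condition is True.
	func isPodReady(pod *corev1.Pod) bool {
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady {
				return cond.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// A pod whose Ready condition is still False, as coredns-7db6d8ff4d-gfcww was above.
		pod := &corev1.Pod{
			Status: corev1.PodStatus{
				Conditions: []corev1.PodCondition{
					{Type: corev1.PodReady, Status: corev1.ConditionFalse},
				},
			},
		}
		fmt.Println(isPodReady(pod)) // false until the kubelet flips the condition
	}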
	I0604 21:56:20.658198    6964 pod_ready.go:78] waiting up to 4m0s for pod "etcd-functional-235400" in "kube-system" namespace to be "Ready" ...
	I0604 21:56:20.658198    6964 round_trippers.go:463] GET https://172.20.136.157:8441/api/v1/namespaces/kube-system/pods/etcd-functional-235400
	I0604 21:56:20.658198    6964 round_trippers.go:469] Request Headers:
	I0604 21:56:20.658198    6964 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 21:56:20.658198    6964 round_trippers.go:473]     Accept: application/json, */*
	I0604 21:56:20.660837    6964 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0604 21:56:20.660837    6964 round_trippers.go:577] Response Headers:
	I0604 21:56:20.660837    6964 round_trippers.go:580]     Audit-Id: 570783d4-0b54-46f0-8316-7ce9b50dee6d
	I0604 21:56:20.660837    6964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 21:56:20.660837    6964 round_trippers.go:580]     Content-Type: application/json
	I0604 21:56:20.660837    6964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bc67a822-ecf2-4193-ad88-87cb0ce26a44
	I0604 21:56:20.660837    6964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 559e51f8-94bd-4b67-be51-fd991bc8927a
	I0604 21:56:20.660837    6964 round_trippers.go:580]     Date: Tue, 04 Jun 2024 21:56:20 GMT
	I0604 21:56:20.661545    6964 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-235400","namespace":"kube-system","uid":"a92a3741-ab9e-4216-96a0-e52e897928b6","resourceVersion":"547","creationTimestamp":"2024-06-04T21:53:26Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.20.136.157:2379","kubernetes.io/config.hash":"ac62a29b768ac90672d20af88ce87818","kubernetes.io/config.mirror":"ac62a29b768ac90672d20af88ce87818","kubernetes.io/config.seen":"2024-06-04T21:53:26.215208271Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-235400","uid":"a943ebb7-2ab5-4491-ae79-8c602d01cd21","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-04T21:53:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6604 chars]
	I0604 21:56:20.662059    6964 round_trippers.go:463] GET https://172.20.136.157:8441/api/v1/nodes/functional-235400
	I0604 21:56:20.662059    6964 round_trippers.go:469] Request Headers:
	I0604 21:56:20.662059    6964 round_trippers.go:473]     Accept: application/json, */*
	I0604 21:56:20.662059    6964 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 21:56:20.664620    6964 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0604 21:56:20.664620    6964 round_trippers.go:577] Response Headers:
	I0604 21:56:20.664620    6964 round_trippers.go:580]     Audit-Id: aeb9b673-98ba-4dde-9951-84f0d4ba7d96
	I0604 21:56:20.664620    6964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 21:56:20.664620    6964 round_trippers.go:580]     Content-Type: application/json
	I0604 21:56:20.664620    6964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bc67a822-ecf2-4193-ad88-87cb0ce26a44
	I0604 21:56:20.664620    6964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 559e51f8-94bd-4b67-be51-fd991bc8927a
	I0604 21:56:20.664620    6964 round_trippers.go:580]     Date: Tue, 04 Jun 2024 21:56:20 GMT
	I0604 21:56:20.664620    6964 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-235400","uid":"a943ebb7-2ab5-4491-ae79-8c602d01cd21","resourceVersion":"546","creationTimestamp":"2024-06-04T21:53:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-235400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"functional-235400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_04T21_53_26_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-04T21:53:22Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0604 21:56:21.159452    6964 round_trippers.go:463] GET https://172.20.136.157:8441/api/v1/namespaces/kube-system/pods/etcd-functional-235400
	I0604 21:56:21.159452    6964 round_trippers.go:469] Request Headers:
	I0604 21:56:21.159452    6964 round_trippers.go:473]     Accept: application/json, */*
	I0604 21:56:21.159564    6964 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 21:56:21.164045    6964 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0604 21:56:21.164139    6964 round_trippers.go:577] Response Headers:
	I0604 21:56:21.164139    6964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bc67a822-ecf2-4193-ad88-87cb0ce26a44
	I0604 21:56:21.164139    6964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 559e51f8-94bd-4b67-be51-fd991bc8927a
	I0604 21:56:21.164139    6964 round_trippers.go:580]     Date: Tue, 04 Jun 2024 21:56:21 GMT
	I0604 21:56:21.164139    6964 round_trippers.go:580]     Audit-Id: 626a014d-89ab-40d1-869d-1da78279ed5a
	I0604 21:56:21.164139    6964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 21:56:21.164139    6964 round_trippers.go:580]     Content-Type: application/json
	I0604 21:56:21.164527    6964 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-235400","namespace":"kube-system","uid":"a92a3741-ab9e-4216-96a0-e52e897928b6","resourceVersion":"547","creationTimestamp":"2024-06-04T21:53:26Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.20.136.157:2379","kubernetes.io/config.hash":"ac62a29b768ac90672d20af88ce87818","kubernetes.io/config.mirror":"ac62a29b768ac90672d20af88ce87818","kubernetes.io/config.seen":"2024-06-04T21:53:26.215208271Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-235400","uid":"a943ebb7-2ab5-4491-ae79-8c602d01cd21","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-04T21:53:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6604 chars]
	I0604 21:56:21.165117    6964 round_trippers.go:463] GET https://172.20.136.157:8441/api/v1/nodes/functional-235400
	I0604 21:56:21.165117    6964 round_trippers.go:469] Request Headers:
	I0604 21:56:21.165117    6964 round_trippers.go:473]     Accept: application/json, */*
	I0604 21:56:21.165117    6964 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 21:56:21.168543    6964 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0604 21:56:21.169112    6964 round_trippers.go:577] Response Headers:
	I0604 21:56:21.169112    6964 round_trippers.go:580]     Date: Tue, 04 Jun 2024 21:56:21 GMT
	I0604 21:56:21.169112    6964 round_trippers.go:580]     Audit-Id: bb76fa17-1475-4cca-b3b4-cff1bfbc602f
	I0604 21:56:21.169112    6964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 21:56:21.169112    6964 round_trippers.go:580]     Content-Type: application/json
	I0604 21:56:21.169112    6964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bc67a822-ecf2-4193-ad88-87cb0ce26a44
	I0604 21:56:21.169112    6964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 559e51f8-94bd-4b67-be51-fd991bc8927a
	I0604 21:56:21.169372    6964 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-235400","uid":"a943ebb7-2ab5-4491-ae79-8c602d01cd21","resourceVersion":"546","creationTimestamp":"2024-06-04T21:53:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-235400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"functional-235400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_04T21_53_26_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-04T21:53:22Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0604 21:56:21.673378    6964 round_trippers.go:463] GET https://172.20.136.157:8441/api/v1/namespaces/kube-system/pods/etcd-functional-235400
	I0604 21:56:21.673378    6964 round_trippers.go:469] Request Headers:
	I0604 21:56:21.673378    6964 round_trippers.go:473]     Accept: application/json, */*
	I0604 21:56:21.673378    6964 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 21:56:21.677973    6964 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0604 21:56:21.677973    6964 round_trippers.go:577] Response Headers:
	I0604 21:56:21.677973    6964 round_trippers.go:580]     Audit-Id: 0710563f-ec5b-47b7-89d4-4c7ba90c334c
	I0604 21:56:21.678331    6964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 21:56:21.678331    6964 round_trippers.go:580]     Content-Type: application/json
	I0604 21:56:21.678331    6964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bc67a822-ecf2-4193-ad88-87cb0ce26a44
	I0604 21:56:21.678331    6964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 559e51f8-94bd-4b67-be51-fd991bc8927a
	I0604 21:56:21.678331    6964 round_trippers.go:580]     Date: Tue, 04 Jun 2024 21:56:21 GMT
	I0604 21:56:21.678965    6964 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-235400","namespace":"kube-system","uid":"a92a3741-ab9e-4216-96a0-e52e897928b6","resourceVersion":"547","creationTimestamp":"2024-06-04T21:53:26Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.20.136.157:2379","kubernetes.io/config.hash":"ac62a29b768ac90672d20af88ce87818","kubernetes.io/config.mirror":"ac62a29b768ac90672d20af88ce87818","kubernetes.io/config.seen":"2024-06-04T21:53:26.215208271Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-235400","uid":"a943ebb7-2ab5-4491-ae79-8c602d01cd21","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-04T21:53:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6604 chars]
	I0604 21:56:21.679962    6964 round_trippers.go:463] GET https://172.20.136.157:8441/api/v1/nodes/functional-235400
	I0604 21:56:21.679962    6964 round_trippers.go:469] Request Headers:
	I0604 21:56:21.679962    6964 round_trippers.go:473]     Accept: application/json, */*
	I0604 21:56:21.679962    6964 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 21:56:21.683008    6964 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0604 21:56:21.683152    6964 round_trippers.go:577] Response Headers:
	I0604 21:56:21.683152    6964 round_trippers.go:580]     Audit-Id: d00c8af4-699d-4dba-b656-38c66b50a816
	I0604 21:56:21.683152    6964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 21:56:21.683152    6964 round_trippers.go:580]     Content-Type: application/json
	I0604 21:56:21.683152    6964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bc67a822-ecf2-4193-ad88-87cb0ce26a44
	I0604 21:56:21.683228    6964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 559e51f8-94bd-4b67-be51-fd991bc8927a
	I0604 21:56:21.683228    6964 round_trippers.go:580]     Date: Tue, 04 Jun 2024 21:56:21 GMT
	I0604 21:56:21.683295    6964 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-235400","uid":"a943ebb7-2ab5-4491-ae79-8c602d01cd21","resourceVersion":"546","creationTimestamp":"2024-06-04T21:53:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-235400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"functional-235400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_04T21_53_26_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-04T21:53:22Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0604 21:56:22.161321    6964 round_trippers.go:463] GET https://172.20.136.157:8441/api/v1/namespaces/kube-system/pods/etcd-functional-235400
	I0604 21:56:22.161417    6964 round_trippers.go:469] Request Headers:
	I0604 21:56:22.161417    6964 round_trippers.go:473]     Accept: application/json, */*
	I0604 21:56:22.161480    6964 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 21:56:22.165405    6964 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0604 21:56:22.166390    6964 round_trippers.go:577] Response Headers:
	I0604 21:56:22.166390    6964 round_trippers.go:580]     Content-Type: application/json
	I0604 21:56:22.166390    6964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bc67a822-ecf2-4193-ad88-87cb0ce26a44
	I0604 21:56:22.166475    6964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 559e51f8-94bd-4b67-be51-fd991bc8927a
	I0604 21:56:22.166475    6964 round_trippers.go:580]     Date: Tue, 04 Jun 2024 21:56:22 GMT
	I0604 21:56:22.166475    6964 round_trippers.go:580]     Audit-Id: 72646dbb-22e3-4fdb-bc36-ae45e3fba098
	I0604 21:56:22.166475    6964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 21:56:22.166701    6964 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-235400","namespace":"kube-system","uid":"a92a3741-ab9e-4216-96a0-e52e897928b6","resourceVersion":"547","creationTimestamp":"2024-06-04T21:53:26Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.20.136.157:2379","kubernetes.io/config.hash":"ac62a29b768ac90672d20af88ce87818","kubernetes.io/config.mirror":"ac62a29b768ac90672d20af88ce87818","kubernetes.io/config.seen":"2024-06-04T21:53:26.215208271Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-235400","uid":"a943ebb7-2ab5-4491-ae79-8c602d01cd21","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-04T21:53:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6604 chars]
	I0604 21:56:22.167443    6964 round_trippers.go:463] GET https://172.20.136.157:8441/api/v1/nodes/functional-235400
	I0604 21:56:22.167443    6964 round_trippers.go:469] Request Headers:
	I0604 21:56:22.167443    6964 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 21:56:22.167443    6964 round_trippers.go:473]     Accept: application/json, */*
	I0604 21:56:22.170645    6964 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0604 21:56:22.170645    6964 round_trippers.go:577] Response Headers:
	I0604 21:56:22.170645    6964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 559e51f8-94bd-4b67-be51-fd991bc8927a
	I0604 21:56:22.170645    6964 round_trippers.go:580]     Date: Tue, 04 Jun 2024 21:56:22 GMT
	I0604 21:56:22.170715    6964 round_trippers.go:580]     Audit-Id: f4f26b8e-4431-4f73-92ab-9b385ab8a1c9
	I0604 21:56:22.170715    6964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 21:56:22.170715    6964 round_trippers.go:580]     Content-Type: application/json
	I0604 21:56:22.170715    6964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bc67a822-ecf2-4193-ad88-87cb0ce26a44
	I0604 21:56:22.171014    6964 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-235400","uid":"a943ebb7-2ab5-4491-ae79-8c602d01cd21","resourceVersion":"546","creationTimestamp":"2024-06-04T21:53:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-235400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"functional-235400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_04T21_53_26_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-04T21:53:22Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0604 21:56:22.662701    6964 round_trippers.go:463] GET https://172.20.136.157:8441/api/v1/namespaces/kube-system/pods/etcd-functional-235400
	I0604 21:56:22.662701    6964 round_trippers.go:469] Request Headers:
	I0604 21:56:22.662701    6964 round_trippers.go:473]     Accept: application/json, */*
	I0604 21:56:22.662701    6964 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 21:56:22.667320    6964 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0604 21:56:22.667320    6964 round_trippers.go:577] Response Headers:
	I0604 21:56:22.667320    6964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 21:56:22.667320    6964 round_trippers.go:580]     Content-Type: application/json
	I0604 21:56:22.667320    6964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bc67a822-ecf2-4193-ad88-87cb0ce26a44
	I0604 21:56:22.667320    6964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 559e51f8-94bd-4b67-be51-fd991bc8927a
	I0604 21:56:22.667320    6964 round_trippers.go:580]     Date: Tue, 04 Jun 2024 21:56:22 GMT
	I0604 21:56:22.667720    6964 round_trippers.go:580]     Audit-Id: da1c8de3-dd05-44ab-a095-a351dd412489
	I0604 21:56:22.667870    6964 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-235400","namespace":"kube-system","uid":"a92a3741-ab9e-4216-96a0-e52e897928b6","resourceVersion":"547","creationTimestamp":"2024-06-04T21:53:26Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.20.136.157:2379","kubernetes.io/config.hash":"ac62a29b768ac90672d20af88ce87818","kubernetes.io/config.mirror":"ac62a29b768ac90672d20af88ce87818","kubernetes.io/config.seen":"2024-06-04T21:53:26.215208271Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-235400","uid":"a943ebb7-2ab5-4491-ae79-8c602d01cd21","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-04T21:53:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6604 chars]
	I0604 21:56:22.668880    6964 round_trippers.go:463] GET https://172.20.136.157:8441/api/v1/nodes/functional-235400
	I0604 21:56:22.668959    6964 round_trippers.go:469] Request Headers:
	I0604 21:56:22.668959    6964 round_trippers.go:473]     Accept: application/json, */*
	I0604 21:56:22.668959    6964 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 21:56:22.671329    6964 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0604 21:56:22.671329    6964 round_trippers.go:577] Response Headers:
	I0604 21:56:22.671329    6964 round_trippers.go:580]     Date: Tue, 04 Jun 2024 21:56:22 GMT
	I0604 21:56:22.671329    6964 round_trippers.go:580]     Audit-Id: 9574ebec-e93f-4ff7-a660-c6d28a87ffcb
	I0604 21:56:22.671329    6964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 21:56:22.671329    6964 round_trippers.go:580]     Content-Type: application/json
	I0604 21:56:22.671329    6964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bc67a822-ecf2-4193-ad88-87cb0ce26a44
	I0604 21:56:22.671890    6964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 559e51f8-94bd-4b67-be51-fd991bc8927a
	I0604 21:56:22.672271    6964 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-235400","uid":"a943ebb7-2ab5-4491-ae79-8c602d01cd21","resourceVersion":"546","creationTimestamp":"2024-06-04T21:53:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-235400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"functional-235400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_04T21_53_26_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-04T21:53:22Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0604 21:56:22.672813    6964 pod_ready.go:102] pod "etcd-functional-235400" in "kube-system" namespace has status "Ready":"False"
	I0604 21:56:23.165542    6964 round_trippers.go:463] GET https://172.20.136.157:8441/api/v1/namespaces/kube-system/pods/etcd-functional-235400
	I0604 21:56:23.165542    6964 round_trippers.go:469] Request Headers:
	I0604 21:56:23.165542    6964 round_trippers.go:473]     Accept: application/json, */*
	I0604 21:56:23.165542    6964 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 21:56:23.169145    6964 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0604 21:56:23.170190    6964 round_trippers.go:577] Response Headers:
	I0604 21:56:23.170190    6964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 559e51f8-94bd-4b67-be51-fd991bc8927a
	I0604 21:56:23.170190    6964 round_trippers.go:580]     Date: Tue, 04 Jun 2024 21:56:23 GMT
	I0604 21:56:23.170190    6964 round_trippers.go:580]     Audit-Id: 7685d9e5-bb2a-47b4-b25a-571364283ab5
	I0604 21:56:23.170190    6964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 21:56:23.170190    6964 round_trippers.go:580]     Content-Type: application/json
	I0604 21:56:23.170190    6964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bc67a822-ecf2-4193-ad88-87cb0ce26a44
	I0604 21:56:23.170400    6964 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-235400","namespace":"kube-system","uid":"a92a3741-ab9e-4216-96a0-e52e897928b6","resourceVersion":"547","creationTimestamp":"2024-06-04T21:53:26Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.20.136.157:2379","kubernetes.io/config.hash":"ac62a29b768ac90672d20af88ce87818","kubernetes.io/config.mirror":"ac62a29b768ac90672d20af88ce87818","kubernetes.io/config.seen":"2024-06-04T21:53:26.215208271Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-235400","uid":"a943ebb7-2ab5-4491-ae79-8c602d01cd21","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-04T21:53:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6604 chars]
	I0604 21:56:23.170727    6964 round_trippers.go:463] GET https://172.20.136.157:8441/api/v1/nodes/functional-235400
	I0604 21:56:23.170727    6964 round_trippers.go:469] Request Headers:
	I0604 21:56:23.170727    6964 round_trippers.go:473]     Accept: application/json, */*
	I0604 21:56:23.170727    6964 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 21:56:23.177937    6964 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0604 21:56:23.177987    6964 round_trippers.go:577] Response Headers:
	I0604 21:56:23.177987    6964 round_trippers.go:580]     Audit-Id: f7607c5e-87be-4354-9119-747f8bfe22bc
	I0604 21:56:23.178188    6964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 21:56:23.178188    6964 round_trippers.go:580]     Content-Type: application/json
	I0604 21:56:23.178188    6964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bc67a822-ecf2-4193-ad88-87cb0ce26a44
	I0604 21:56:23.178188    6964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 559e51f8-94bd-4b67-be51-fd991bc8927a
	I0604 21:56:23.178188    6964 round_trippers.go:580]     Date: Tue, 04 Jun 2024 21:56:23 GMT
	I0604 21:56:23.178719    6964 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-235400","uid":"a943ebb7-2ab5-4491-ae79-8c602d01cd21","resourceVersion":"546","creationTimestamp":"2024-06-04T21:53:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-235400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"functional-235400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_04T21_53_26_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-04T21:53:22Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0604 21:56:23.667536    6964 round_trippers.go:463] GET https://172.20.136.157:8441/api/v1/namespaces/kube-system/pods/etcd-functional-235400
	I0604 21:56:23.667741    6964 round_trippers.go:469] Request Headers:
	I0604 21:56:23.667741    6964 round_trippers.go:473]     Accept: application/json, */*
	I0604 21:56:23.667741    6964 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 21:56:23.671091    6964 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0604 21:56:23.671091    6964 round_trippers.go:577] Response Headers:
	I0604 21:56:23.671091    6964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bc67a822-ecf2-4193-ad88-87cb0ce26a44
	I0604 21:56:23.671091    6964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 559e51f8-94bd-4b67-be51-fd991bc8927a
	I0604 21:56:23.671695    6964 round_trippers.go:580]     Date: Tue, 04 Jun 2024 21:56:23 GMT
	I0604 21:56:23.671695    6964 round_trippers.go:580]     Audit-Id: 2e1cc39e-1fe9-4e80-9eec-68d127ef41dd
	I0604 21:56:23.671695    6964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 21:56:23.671695    6964 round_trippers.go:580]     Content-Type: application/json
	I0604 21:56:23.671890    6964 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-235400","namespace":"kube-system","uid":"a92a3741-ab9e-4216-96a0-e52e897928b6","resourceVersion":"547","creationTimestamp":"2024-06-04T21:53:26Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.20.136.157:2379","kubernetes.io/config.hash":"ac62a29b768ac90672d20af88ce87818","kubernetes.io/config.mirror":"ac62a29b768ac90672d20af88ce87818","kubernetes.io/config.seen":"2024-06-04T21:53:26.215208271Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-235400","uid":"a943ebb7-2ab5-4491-ae79-8c602d01cd21","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-04T21:53:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6604 chars]
	I0604 21:56:23.672341    6964 round_trippers.go:463] GET https://172.20.136.157:8441/api/v1/nodes/functional-235400
	I0604 21:56:23.672341    6964 round_trippers.go:469] Request Headers:
	I0604 21:56:23.672341    6964 round_trippers.go:473]     Accept: application/json, */*
	I0604 21:56:23.672341    6964 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 21:56:23.675391    6964 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0604 21:56:23.675391    6964 round_trippers.go:577] Response Headers:
	I0604 21:56:23.675391    6964 round_trippers.go:580]     Audit-Id: 8739526e-be19-475b-a98b-1741c98eada4
	I0604 21:56:23.675547    6964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 21:56:23.675547    6964 round_trippers.go:580]     Content-Type: application/json
	I0604 21:56:23.675547    6964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bc67a822-ecf2-4193-ad88-87cb0ce26a44
	I0604 21:56:23.675547    6964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 559e51f8-94bd-4b67-be51-fd991bc8927a
	I0604 21:56:23.675547    6964 round_trippers.go:580]     Date: Tue, 04 Jun 2024 21:56:23 GMT
	I0604 21:56:23.675884    6964 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-235400","uid":"a943ebb7-2ab5-4491-ae79-8c602d01cd21","resourceVersion":"546","creationTimestamp":"2024-06-04T21:53:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-235400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"functional-235400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_04T21_53_26_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-04T21:53:22Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0604 21:56:24.166877    6964 round_trippers.go:463] GET https://172.20.136.157:8441/api/v1/namespaces/kube-system/pods/etcd-functional-235400
	I0604 21:56:24.166984    6964 round_trippers.go:469] Request Headers:
	I0604 21:56:24.166984    6964 round_trippers.go:473]     Accept: application/json, */*
	I0604 21:56:24.166984    6964 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 21:56:24.171397    6964 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0604 21:56:24.172095    6964 round_trippers.go:577] Response Headers:
	I0604 21:56:24.172095    6964 round_trippers.go:580]     Audit-Id: d54942f9-0815-4d84-af15-76cd441825e7
	I0604 21:56:24.172172    6964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 21:56:24.172172    6964 round_trippers.go:580]     Content-Type: application/json
	I0604 21:56:24.172172    6964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bc67a822-ecf2-4193-ad88-87cb0ce26a44
	I0604 21:56:24.172172    6964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 559e51f8-94bd-4b67-be51-fd991bc8927a
	I0604 21:56:24.172172    6964 round_trippers.go:580]     Date: Tue, 04 Jun 2024 21:56:24 GMT
	I0604 21:56:24.172758    6964 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-235400","namespace":"kube-system","uid":"a92a3741-ab9e-4216-96a0-e52e897928b6","resourceVersion":"614","creationTimestamp":"2024-06-04T21:53:26Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.20.136.157:2379","kubernetes.io/config.hash":"ac62a29b768ac90672d20af88ce87818","kubernetes.io/config.mirror":"ac62a29b768ac90672d20af88ce87818","kubernetes.io/config.seen":"2024-06-04T21:53:26.215208271Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-235400","uid":"a943ebb7-2ab5-4491-ae79-8c602d01cd21","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-04T21:53:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6380 chars]
	I0604 21:56:24.173531    6964 round_trippers.go:463] GET https://172.20.136.157:8441/api/v1/nodes/functional-235400
	I0604 21:56:24.173531    6964 round_trippers.go:469] Request Headers:
	I0604 21:56:24.173531    6964 round_trippers.go:473]     Accept: application/json, */*
	I0604 21:56:24.173531    6964 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 21:56:24.177272    6964 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0604 21:56:24.177272    6964 round_trippers.go:577] Response Headers:
	I0604 21:56:24.177272    6964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bc67a822-ecf2-4193-ad88-87cb0ce26a44
	I0604 21:56:24.177272    6964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 559e51f8-94bd-4b67-be51-fd991bc8927a
	I0604 21:56:24.177272    6964 round_trippers.go:580]     Date: Tue, 04 Jun 2024 21:56:24 GMT
	I0604 21:56:24.177272    6964 round_trippers.go:580]     Audit-Id: 01a3e890-1b4e-43f8-8b78-64ef72c36427
	I0604 21:56:24.177272    6964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 21:56:24.177272    6964 round_trippers.go:580]     Content-Type: application/json
	I0604 21:56:24.177936    6964 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-235400","uid":"a943ebb7-2ab5-4491-ae79-8c602d01cd21","resourceVersion":"546","creationTimestamp":"2024-06-04T21:53:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-235400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"functional-235400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_04T21_53_26_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-04T21:53:22Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0604 21:56:24.178472    6964 pod_ready.go:92] pod "etcd-functional-235400" in "kube-system" namespace has status "Ready":"True"
	I0604 21:56:24.178536    6964 pod_ready.go:81] duration metric: took 3.5203088s for pod "etcd-functional-235400" in "kube-system" namespace to be "Ready" ...
	I0604 21:56:24.178536    6964 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-functional-235400" in "kube-system" namespace to be "Ready" ...
	I0604 21:56:24.178602    6964 round_trippers.go:463] GET https://172.20.136.157:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-235400
	I0604 21:56:24.178697    6964 round_trippers.go:469] Request Headers:
	I0604 21:56:24.178697    6964 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 21:56:24.178697    6964 round_trippers.go:473]     Accept: application/json, */*
	I0604 21:56:24.181393    6964 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0604 21:56:24.181393    6964 round_trippers.go:577] Response Headers:
	I0604 21:56:24.181393    6964 round_trippers.go:580]     Content-Type: application/json
	I0604 21:56:24.181393    6964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bc67a822-ecf2-4193-ad88-87cb0ce26a44
	I0604 21:56:24.181393    6964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 559e51f8-94bd-4b67-be51-fd991bc8927a
	I0604 21:56:24.181393    6964 round_trippers.go:580]     Date: Tue, 04 Jun 2024 21:56:24 GMT
	I0604 21:56:24.181393    6964 round_trippers.go:580]     Audit-Id: f14a89f7-9a56-49d3-b7de-0c7195c37116
	I0604 21:56:24.181393    6964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 21:56:24.182274    6964 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-235400","namespace":"kube-system","uid":"97c5c050-f0df-4404-8f46-6471f9c83e91","resourceVersion":"549","creationTimestamp":"2024-06-04T21:53:26Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.20.136.157:8441","kubernetes.io/config.hash":"7e809a072d1b36cbd8dbd9fdfa02b35b","kubernetes.io/config.mirror":"7e809a072d1b36cbd8dbd9fdfa02b35b","kubernetes.io/config.seen":"2024-06-04T21:53:26.215213671Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-235400","uid":"a943ebb7-2ab5-4491-ae79-8c602d01cd21","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-04T21:53:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.
kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernet [truncated 8158 chars]
	I0604 21:56:24.183005    6964 round_trippers.go:463] GET https://172.20.136.157:8441/api/v1/nodes/functional-235400
	I0604 21:56:24.183160    6964 round_trippers.go:469] Request Headers:
	I0604 21:56:24.183160    6964 round_trippers.go:473]     Accept: application/json, */*
	I0604 21:56:24.183160    6964 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 21:56:24.186214    6964 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0604 21:56:24.186214    6964 round_trippers.go:577] Response Headers:
	I0604 21:56:24.186214    6964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 21:56:24.187031    6964 round_trippers.go:580]     Content-Type: application/json
	I0604 21:56:24.187031    6964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bc67a822-ecf2-4193-ad88-87cb0ce26a44
	I0604 21:56:24.187031    6964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 559e51f8-94bd-4b67-be51-fd991bc8927a
	I0604 21:56:24.187031    6964 round_trippers.go:580]     Date: Tue, 04 Jun 2024 21:56:24 GMT
	I0604 21:56:24.187031    6964 round_trippers.go:580]     Audit-Id: 7a9f103b-9e06-4993-8cfa-0bed6b6c5d25
	I0604 21:56:24.187162    6964 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-235400","uid":"a943ebb7-2ab5-4491-ae79-8c602d01cd21","resourceVersion":"546","creationTimestamp":"2024-06-04T21:53:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-235400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"functional-235400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_04T21_53_26_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-04T21:53:22Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0604 21:56:24.682014    6964 round_trippers.go:463] GET https://172.20.136.157:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-235400
	I0604 21:56:24.682014    6964 round_trippers.go:469] Request Headers:
	I0604 21:56:24.682014    6964 round_trippers.go:473]     Accept: application/json, */*
	I0604 21:56:24.682014    6964 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 21:56:24.685593    6964 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0604 21:56:24.685593    6964 round_trippers.go:577] Response Headers:
	I0604 21:56:24.685593    6964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 559e51f8-94bd-4b67-be51-fd991bc8927a
	I0604 21:56:24.686508    6964 round_trippers.go:580]     Date: Tue, 04 Jun 2024 21:56:24 GMT
	I0604 21:56:24.686508    6964 round_trippers.go:580]     Audit-Id: ac964289-2745-4a8d-a812-01881dddba98
	I0604 21:56:24.686508    6964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 21:56:24.686508    6964 round_trippers.go:580]     Content-Type: application/json
	I0604 21:56:24.686508    6964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bc67a822-ecf2-4193-ad88-87cb0ce26a44
	I0604 21:56:24.687729    6964 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-235400","namespace":"kube-system","uid":"97c5c050-f0df-4404-8f46-6471f9c83e91","resourceVersion":"549","creationTimestamp":"2024-06-04T21:53:26Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.20.136.157:8441","kubernetes.io/config.hash":"7e809a072d1b36cbd8dbd9fdfa02b35b","kubernetes.io/config.mirror":"7e809a072d1b36cbd8dbd9fdfa02b35b","kubernetes.io/config.seen":"2024-06-04T21:53:26.215213671Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-235400","uid":"a943ebb7-2ab5-4491-ae79-8c602d01cd21","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-04T21:53:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.
kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernet [truncated 8158 chars]
	I0604 21:56:24.687991    6964 round_trippers.go:463] GET https://172.20.136.157:8441/api/v1/nodes/functional-235400
	I0604 21:56:24.687991    6964 round_trippers.go:469] Request Headers:
	I0604 21:56:24.687991    6964 round_trippers.go:473]     Accept: application/json, */*
	I0604 21:56:24.687991    6964 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 21:56:24.691800    6964 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0604 21:56:24.692004    6964 round_trippers.go:577] Response Headers:
	I0604 21:56:24.692004    6964 round_trippers.go:580]     Audit-Id: 16b30a58-3752-4aa3-90d7-31485b8dce2a
	I0604 21:56:24.692004    6964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 21:56:24.692080    6964 round_trippers.go:580]     Content-Type: application/json
	I0604 21:56:24.692080    6964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bc67a822-ecf2-4193-ad88-87cb0ce26a44
	I0604 21:56:24.692080    6964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 559e51f8-94bd-4b67-be51-fd991bc8927a
	I0604 21:56:24.692080    6964 round_trippers.go:580]     Date: Tue, 04 Jun 2024 21:56:24 GMT
	I0604 21:56:24.692080    6964 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-235400","uid":"a943ebb7-2ab5-4491-ae79-8c602d01cd21","resourceVersion":"546","creationTimestamp":"2024-06-04T21:53:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-235400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"functional-235400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_04T21_53_26_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-04T21:53:22Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0604 21:56:25.181117    6964 round_trippers.go:463] GET https://172.20.136.157:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-235400
	I0604 21:56:25.181117    6964 round_trippers.go:469] Request Headers:
	I0604 21:56:25.181117    6964 round_trippers.go:473]     Accept: application/json, */*
	I0604 21:56:25.181117    6964 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 21:56:25.186203    6964 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0604 21:56:25.186267    6964 round_trippers.go:577] Response Headers:
	I0604 21:56:25.186267    6964 round_trippers.go:580]     Audit-Id: a3eba4c5-fd2e-4bb9-a7e1-d23122508965
	I0604 21:56:25.186267    6964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 21:56:25.186267    6964 round_trippers.go:580]     Content-Type: application/json
	I0604 21:56:25.186267    6964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bc67a822-ecf2-4193-ad88-87cb0ce26a44
	I0604 21:56:25.186267    6964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 559e51f8-94bd-4b67-be51-fd991bc8927a
	I0604 21:56:25.186267    6964 round_trippers.go:580]     Date: Tue, 04 Jun 2024 21:56:25 GMT
	I0604 21:56:25.186267    6964 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-235400","namespace":"kube-system","uid":"97c5c050-f0df-4404-8f46-6471f9c83e91","resourceVersion":"549","creationTimestamp":"2024-06-04T21:53:26Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.20.136.157:8441","kubernetes.io/config.hash":"7e809a072d1b36cbd8dbd9fdfa02b35b","kubernetes.io/config.mirror":"7e809a072d1b36cbd8dbd9fdfa02b35b","kubernetes.io/config.seen":"2024-06-04T21:53:26.215213671Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-235400","uid":"a943ebb7-2ab5-4491-ae79-8c602d01cd21","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-04T21:53:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.
kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernet [truncated 8158 chars]
	I0604 21:56:25.187023    6964 round_trippers.go:463] GET https://172.20.136.157:8441/api/v1/nodes/functional-235400
	I0604 21:56:25.187550    6964 round_trippers.go:469] Request Headers:
	I0604 21:56:25.187634    6964 round_trippers.go:473]     Accept: application/json, */*
	I0604 21:56:25.187634    6964 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 21:56:25.190562    6964 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0604 21:56:25.190562    6964 round_trippers.go:577] Response Headers:
	I0604 21:56:25.190562    6964 round_trippers.go:580]     Audit-Id: add979ce-fb41-4fe4-82b4-798b374240cc
	I0604 21:56:25.190562    6964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 21:56:25.190562    6964 round_trippers.go:580]     Content-Type: application/json
	I0604 21:56:25.190562    6964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bc67a822-ecf2-4193-ad88-87cb0ce26a44
	I0604 21:56:25.190562    6964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 559e51f8-94bd-4b67-be51-fd991bc8927a
	I0604 21:56:25.190562    6964 round_trippers.go:580]     Date: Tue, 04 Jun 2024 21:56:25 GMT
	I0604 21:56:25.191239    6964 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-235400","uid":"a943ebb7-2ab5-4491-ae79-8c602d01cd21","resourceVersion":"546","creationTimestamp":"2024-06-04T21:53:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-235400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"functional-235400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_04T21_53_26_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-04T21:53:22Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0604 21:56:25.682366    6964 round_trippers.go:463] GET https://172.20.136.157:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-235400
	I0604 21:56:25.682456    6964 round_trippers.go:469] Request Headers:
	I0604 21:56:25.682456    6964 round_trippers.go:473]     Accept: application/json, */*
	I0604 21:56:25.682456    6964 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 21:56:25.686443    6964 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0604 21:56:25.687414    6964 round_trippers.go:577] Response Headers:
	I0604 21:56:25.687414    6964 round_trippers.go:580]     Content-Type: application/json
	I0604 21:56:25.687414    6964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bc67a822-ecf2-4193-ad88-87cb0ce26a44
	I0604 21:56:25.687414    6964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 559e51f8-94bd-4b67-be51-fd991bc8927a
	I0604 21:56:25.687414    6964 round_trippers.go:580]     Date: Tue, 04 Jun 2024 21:56:25 GMT
	I0604 21:56:25.687414    6964 round_trippers.go:580]     Audit-Id: e24ad0bb-e0fd-4d1f-83bd-4546ffabfb32
	I0604 21:56:25.687489    6964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 21:56:25.687747    6964 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-235400","namespace":"kube-system","uid":"97c5c050-f0df-4404-8f46-6471f9c83e91","resourceVersion":"549","creationTimestamp":"2024-06-04T21:53:26Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.20.136.157:8441","kubernetes.io/config.hash":"7e809a072d1b36cbd8dbd9fdfa02b35b","kubernetes.io/config.mirror":"7e809a072d1b36cbd8dbd9fdfa02b35b","kubernetes.io/config.seen":"2024-06-04T21:53:26.215213671Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-235400","uid":"a943ebb7-2ab5-4491-ae79-8c602d01cd21","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-04T21:53:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.
kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernet [truncated 8158 chars]
	I0604 21:56:25.688331    6964 round_trippers.go:463] GET https://172.20.136.157:8441/api/v1/nodes/functional-235400
	I0604 21:56:25.688331    6964 round_trippers.go:469] Request Headers:
	I0604 21:56:25.688331    6964 round_trippers.go:473]     Accept: application/json, */*
	I0604 21:56:25.688331    6964 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 21:56:25.690946    6964 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0604 21:56:25.690946    6964 round_trippers.go:577] Response Headers:
	I0604 21:56:25.690946    6964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 21:56:25.690946    6964 round_trippers.go:580]     Content-Type: application/json
	I0604 21:56:25.690946    6964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bc67a822-ecf2-4193-ad88-87cb0ce26a44
	I0604 21:56:25.690946    6964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 559e51f8-94bd-4b67-be51-fd991bc8927a
	I0604 21:56:25.690946    6964 round_trippers.go:580]     Date: Tue, 04 Jun 2024 21:56:25 GMT
	I0604 21:56:25.690946    6964 round_trippers.go:580]     Audit-Id: bdac7d94-006b-4368-a09a-b26ff830723f
	I0604 21:56:25.691498    6964 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-235400","uid":"a943ebb7-2ab5-4491-ae79-8c602d01cd21","resourceVersion":"546","creationTimestamp":"2024-06-04T21:53:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-235400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"functional-235400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_04T21_53_26_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-04T21:53:22Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0604 21:56:26.181048    6964 round_trippers.go:463] GET https://172.20.136.157:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-235400
	I0604 21:56:26.181161    6964 round_trippers.go:469] Request Headers:
	I0604 21:56:26.181161    6964 round_trippers.go:473]     Accept: application/json, */*
	I0604 21:56:26.181161    6964 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 21:56:26.185659    6964 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0604 21:56:26.185909    6964 round_trippers.go:577] Response Headers:
	I0604 21:56:26.185909    6964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 559e51f8-94bd-4b67-be51-fd991bc8927a
	I0604 21:56:26.185909    6964 round_trippers.go:580]     Date: Tue, 04 Jun 2024 21:56:26 GMT
	I0604 21:56:26.185909    6964 round_trippers.go:580]     Audit-Id: 5fee5dbb-b62c-401c-a29d-5362dd6c7c85
	I0604 21:56:26.185909    6964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 21:56:26.185909    6964 round_trippers.go:580]     Content-Type: application/json
	I0604 21:56:26.185909    6964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bc67a822-ecf2-4193-ad88-87cb0ce26a44
	I0604 21:56:26.186224    6964 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-235400","namespace":"kube-system","uid":"97c5c050-f0df-4404-8f46-6471f9c83e91","resourceVersion":"620","creationTimestamp":"2024-06-04T21:53:26Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.20.136.157:8441","kubernetes.io/config.hash":"7e809a072d1b36cbd8dbd9fdfa02b35b","kubernetes.io/config.mirror":"7e809a072d1b36cbd8dbd9fdfa02b35b","kubernetes.io/config.seen":"2024-06-04T21:53:26.215213671Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-235400","uid":"a943ebb7-2ab5-4491-ae79-8c602d01cd21","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-04T21:53:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.
kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernet [truncated 7914 chars]
	I0604 21:56:26.187085    6964 round_trippers.go:463] GET https://172.20.136.157:8441/api/v1/nodes/functional-235400
	I0604 21:56:26.187193    6964 round_trippers.go:469] Request Headers:
	I0604 21:56:26.187193    6964 round_trippers.go:473]     Accept: application/json, */*
	I0604 21:56:26.187193    6964 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 21:56:26.191443    6964 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0604 21:56:26.191443    6964 round_trippers.go:577] Response Headers:
	I0604 21:56:26.191506    6964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 559e51f8-94bd-4b67-be51-fd991bc8927a
	I0604 21:56:26.191525    6964 round_trippers.go:580]     Date: Tue, 04 Jun 2024 21:56:26 GMT
	I0604 21:56:26.191525    6964 round_trippers.go:580]     Audit-Id: 7aff261c-fdc3-4b38-a550-d0d4b8830b79
	I0604 21:56:26.191525    6964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 21:56:26.191525    6964 round_trippers.go:580]     Content-Type: application/json
	I0604 21:56:26.191525    6964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bc67a822-ecf2-4193-ad88-87cb0ce26a44
	I0604 21:56:26.191743    6964 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-235400","uid":"a943ebb7-2ab5-4491-ae79-8c602d01cd21","resourceVersion":"546","creationTimestamp":"2024-06-04T21:53:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-235400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"functional-235400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_04T21_53_26_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-04T21:53:22Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0604 21:56:26.191743    6964 pod_ready.go:92] pod "kube-apiserver-functional-235400" in "kube-system" namespace has status "Ready":"True"
	I0604 21:56:26.192278    6964 pod_ready.go:81] duration metric: took 2.0137254s for pod "kube-apiserver-functional-235400" in "kube-system" namespace to be "Ready" ...
	I0604 21:56:26.192278    6964 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-functional-235400" in "kube-system" namespace to be "Ready" ...
	I0604 21:56:26.192405    6964 round_trippers.go:463] GET https://172.20.136.157:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-235400
	I0604 21:56:26.192405    6964 round_trippers.go:469] Request Headers:
	I0604 21:56:26.192405    6964 round_trippers.go:473]     Accept: application/json, */*
	I0604 21:56:26.192405    6964 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 21:56:26.195332    6964 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0604 21:56:26.195332    6964 round_trippers.go:577] Response Headers:
	I0604 21:56:26.195332    6964 round_trippers.go:580]     Audit-Id: c4d34d92-0ccc-452f-af8c-92c9d559d6aa
	I0604 21:56:26.195332    6964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 21:56:26.195332    6964 round_trippers.go:580]     Content-Type: application/json
	I0604 21:56:26.195718    6964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bc67a822-ecf2-4193-ad88-87cb0ce26a44
	I0604 21:56:26.195718    6964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 559e51f8-94bd-4b67-be51-fd991bc8927a
	I0604 21:56:26.195718    6964 round_trippers.go:580]     Date: Tue, 04 Jun 2024 21:56:26 GMT
	I0604 21:56:26.196027    6964 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-functional-235400","namespace":"kube-system","uid":"e62d7208-9bc7-486a-b79e-c4f2ca54e84b","resourceVersion":"616","creationTimestamp":"2024-06-04T21:53:26Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"c21cbb8fec9519fcb1d5344ff7de18f7","kubernetes.io/config.mirror":"c21cbb8fec9519fcb1d5344ff7de18f7","kubernetes.io/config.seen":"2024-06-04T21:53:26.215215071Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-235400","uid":"a943ebb7-2ab5-4491-ae79-8c602d01cd21","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-04T21:53:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes
.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{"." [truncated 7477 chars]
	I0604 21:56:26.196203    6964 round_trippers.go:463] GET https://172.20.136.157:8441/api/v1/nodes/functional-235400
	I0604 21:56:26.196203    6964 round_trippers.go:469] Request Headers:
	I0604 21:56:26.196203    6964 round_trippers.go:473]     Accept: application/json, */*
	I0604 21:56:26.196203    6964 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 21:56:26.198792    6964 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0604 21:56:26.198792    6964 round_trippers.go:577] Response Headers:
	I0604 21:56:26.198792    6964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 21:56:26.198792    6964 round_trippers.go:580]     Content-Type: application/json
	I0604 21:56:26.198792    6964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bc67a822-ecf2-4193-ad88-87cb0ce26a44
	I0604 21:56:26.198792    6964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 559e51f8-94bd-4b67-be51-fd991bc8927a
	I0604 21:56:26.198792    6964 round_trippers.go:580]     Date: Tue, 04 Jun 2024 21:56:26 GMT
	I0604 21:56:26.198792    6964 round_trippers.go:580]     Audit-Id: 6cd222e9-4325-4f47-8d53-548e7d24ef2c
	I0604 21:56:26.200804    6964 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-235400","uid":"a943ebb7-2ab5-4491-ae79-8c602d01cd21","resourceVersion":"546","creationTimestamp":"2024-06-04T21:53:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-235400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"functional-235400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_04T21_53_26_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-04T21:53:22Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0604 21:56:26.201748    6964 pod_ready.go:92] pod "kube-controller-manager-functional-235400" in "kube-system" namespace has status "Ready":"True"
	I0604 21:56:26.201748    6964 pod_ready.go:81] duration metric: took 9.4701ms for pod "kube-controller-manager-functional-235400" in "kube-system" namespace to be "Ready" ...
	I0604 21:56:26.201748    6964 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-2xs47" in "kube-system" namespace to be "Ready" ...
	I0604 21:56:26.201748    6964 round_trippers.go:463] GET https://172.20.136.157:8441/api/v1/namespaces/kube-system/pods/kube-proxy-2xs47
	I0604 21:56:26.201748    6964 round_trippers.go:469] Request Headers:
	I0604 21:56:26.201748    6964 round_trippers.go:473]     Accept: application/json, */*
	I0604 21:56:26.201748    6964 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 21:56:26.212314    6964 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0604 21:56:26.212816    6964 round_trippers.go:577] Response Headers:
	I0604 21:56:26.212816    6964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 559e51f8-94bd-4b67-be51-fd991bc8927a
	I0604 21:56:26.212816    6964 round_trippers.go:580]     Date: Tue, 04 Jun 2024 21:56:26 GMT
	I0604 21:56:26.212816    6964 round_trippers.go:580]     Audit-Id: 630d3859-ba74-42cb-9445-b5f57c1b648a
	I0604 21:56:26.212816    6964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 21:56:26.212816    6964 round_trippers.go:580]     Content-Type: application/json
	I0604 21:56:26.212816    6964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bc67a822-ecf2-4193-ad88-87cb0ce26a44
	I0604 21:56:26.213012    6964 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-2xs47","generateName":"kube-proxy-","namespace":"kube-system","uid":"144b79dc-c192-4e05-a481-2047f1a943c9","resourceVersion":"557","creationTimestamp":"2024-06-04T21:53:39Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"ff7b329f-e3eb-469e-b3d9-58eb3a5dd994","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-04T21:53:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ff7b329f-e3eb-469e-b3d9-58eb3a5dd994\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6040 chars]
	I0604 21:56:26.213328    6964 round_trippers.go:463] GET https://172.20.136.157:8441/api/v1/nodes/functional-235400
	I0604 21:56:26.213328    6964 round_trippers.go:469] Request Headers:
	I0604 21:56:26.213328    6964 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 21:56:26.213328    6964 round_trippers.go:473]     Accept: application/json, */*
	I0604 21:56:26.216322    6964 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0604 21:56:26.216322    6964 round_trippers.go:577] Response Headers:
	I0604 21:56:26.216814    6964 round_trippers.go:580]     Content-Type: application/json
	I0604 21:56:26.216814    6964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bc67a822-ecf2-4193-ad88-87cb0ce26a44
	I0604 21:56:26.216814    6964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 559e51f8-94bd-4b67-be51-fd991bc8927a
	I0604 21:56:26.216814    6964 round_trippers.go:580]     Date: Tue, 04 Jun 2024 21:56:26 GMT
	I0604 21:56:26.216814    6964 round_trippers.go:580]     Audit-Id: 872286a8-e87b-4f9b-9e0f-8f6254435ef6
	I0604 21:56:26.216814    6964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 21:56:26.217610    6964 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-235400","uid":"a943ebb7-2ab5-4491-ae79-8c602d01cd21","resourceVersion":"546","creationTimestamp":"2024-06-04T21:53:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-235400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"functional-235400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_04T21_53_26_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-04T21:53:22Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0604 21:56:26.217610    6964 pod_ready.go:92] pod "kube-proxy-2xs47" in "kube-system" namespace has status "Ready":"True"
	I0604 21:56:26.217610    6964 pod_ready.go:81] duration metric: took 15.8617ms for pod "kube-proxy-2xs47" in "kube-system" namespace to be "Ready" ...
	I0604 21:56:26.217610    6964 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-functional-235400" in "kube-system" namespace to be "Ready" ...
	I0604 21:56:26.217610    6964 round_trippers.go:463] GET https://172.20.136.157:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-235400
	I0604 21:56:26.217610    6964 round_trippers.go:469] Request Headers:
	I0604 21:56:26.217610    6964 round_trippers.go:473]     Accept: application/json, */*
	I0604 21:56:26.217610    6964 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 21:56:26.220278    6964 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0604 21:56:26.220779    6964 round_trippers.go:577] Response Headers:
	I0604 21:56:26.220779    6964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 559e51f8-94bd-4b67-be51-fd991bc8927a
	I0604 21:56:26.220779    6964 round_trippers.go:580]     Date: Tue, 04 Jun 2024 21:56:26 GMT
	I0604 21:56:26.220779    6964 round_trippers.go:580]     Audit-Id: 6ed6bbd5-1679-4e08-a901-2c041e46ee94
	I0604 21:56:26.220779    6964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 21:56:26.220779    6964 round_trippers.go:580]     Content-Type: application/json
	I0604 21:56:26.220846    6964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bc67a822-ecf2-4193-ad88-87cb0ce26a44
	I0604 21:56:26.220951    6964 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-235400","namespace":"kube-system","uid":"6d9b2d8e-c3f2-452b-9d86-c16e21c60e69","resourceVersion":"613","creationTimestamp":"2024-06-04T21:53:26Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"0483b6e339171810d67647ce15760b58","kubernetes.io/config.mirror":"0483b6e339171810d67647ce15760b58","kubernetes.io/config.seen":"2024-06-04T21:53:26.215216171Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-235400","uid":"a943ebb7-2ab5-4491-ae79-8c602d01cd21","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-04T21:53:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5207 chars]
	I0604 21:56:26.220951    6964 round_trippers.go:463] GET https://172.20.136.157:8441/api/v1/nodes/functional-235400
	I0604 21:56:26.220951    6964 round_trippers.go:469] Request Headers:
	I0604 21:56:26.220951    6964 round_trippers.go:473]     Accept: application/json, */*
	I0604 21:56:26.220951    6964 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 21:56:26.223522    6964 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0604 21:56:26.223522    6964 round_trippers.go:577] Response Headers:
	I0604 21:56:26.223522    6964 round_trippers.go:580]     Content-Type: application/json
	I0604 21:56:26.223522    6964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bc67a822-ecf2-4193-ad88-87cb0ce26a44
	I0604 21:56:26.223522    6964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 559e51f8-94bd-4b67-be51-fd991bc8927a
	I0604 21:56:26.223522    6964 round_trippers.go:580]     Date: Tue, 04 Jun 2024 21:56:26 GMT
	I0604 21:56:26.223522    6964 round_trippers.go:580]     Audit-Id: 52fc3d16-1239-44f0-bc89-066b39652230
	I0604 21:56:26.223522    6964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 21:56:26.224019    6964 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-235400","uid":"a943ebb7-2ab5-4491-ae79-8c602d01cd21","resourceVersion":"546","creationTimestamp":"2024-06-04T21:53:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-235400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"functional-235400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_04T21_53_26_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-04T21:53:22Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0604 21:56:26.224019    6964 pod_ready.go:92] pod "kube-scheduler-functional-235400" in "kube-system" namespace has status "Ready":"True"
	I0604 21:56:26.224019    6964 pod_ready.go:81] duration metric: took 6.4098ms for pod "kube-scheduler-functional-235400" in "kube-system" namespace to be "Ready" ...
	I0604 21:56:26.224019    6964 pod_ready.go:38] duration metric: took 9.0902048s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
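	(The pod_ready.go polling captured above — a GET on each control-plane pod, a GET on the node, then a check of the pod's Ready condition — can be approximated outside minikube with a small client-go loop. The sketch below is illustrative only, not minikube's own code; it assumes a kubeconfig at the default location and reuses the pod and namespace names from this log.)

	// Illustrative sketch: poll a pod until its Ready condition is True,
	// mirroring the pod_ready.go requests logged above. Assumes a kubeconfig
	// at the default path; pod name and timeout are taken from this log.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		deadline := time.Now().Add(6 * time.Minute)
		for time.Now().Before(deadline) {
			pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(),
				"kube-apiserver-functional-235400", metav1.GetOptions{})
			if err == nil {
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						fmt.Println("pod is Ready")
						return
					}
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for pod to become Ready")
	}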
	I0604 21:56:26.224019    6964 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0604 21:56:26.244592    6964 command_runner.go:130] > -16
	I0604 21:56:26.245227    6964 ops.go:34] apiserver oom_adj: -16
	I0604 21:56:26.245227    6964 kubeadm.go:591] duration metric: took 19.7050359s to restartPrimaryControlPlane
	I0604 21:56:26.245227    6964 kubeadm.go:393] duration metric: took 19.8281969s to StartCluster
	I0604 21:56:26.245414    6964 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0604 21:56:26.245636    6964 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0604 21:56:26.246793    6964 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0604 21:56:26.248872    6964 start.go:234] Will wait 6m0s for node &{Name: IP:172.20.136.157 Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0604 21:56:26.248872    6964 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0604 21:56:26.252773    6964 out.go:177] * Verifying Kubernetes components...
	I0604 21:56:26.249073    6964 addons.go:69] Setting storage-provisioner=true in profile "functional-235400"
	I0604 21:56:26.249073    6964 addons.go:69] Setting default-storageclass=true in profile "functional-235400"
	I0604 21:56:26.249373    6964 config.go:182] Loaded profile config "functional-235400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0604 21:56:26.255045    6964 addons.go:234] Setting addon storage-provisioner=true in "functional-235400"
	W0604 21:56:26.255045    6964 addons.go:243] addon storage-provisioner should already be in state true
	I0604 21:56:26.255045    6964 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-235400"
	I0604 21:56:26.255045    6964 host.go:66] Checking if "functional-235400" exists ...
	I0604 21:56:26.255973    6964 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-235400 ).state
	I0604 21:56:26.255973    6964 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-235400 ).state
	I0604 21:56:26.269976    6964 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0604 21:56:26.620116    6964 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0604 21:56:26.649074    6964 node_ready.go:35] waiting up to 6m0s for node "functional-235400" to be "Ready" ...
	I0604 21:56:26.649074    6964 round_trippers.go:463] GET https://172.20.136.157:8441/api/v1/nodes/functional-235400
	I0604 21:56:26.649074    6964 round_trippers.go:469] Request Headers:
	I0604 21:56:26.649074    6964 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 21:56:26.649074    6964 round_trippers.go:473]     Accept: application/json, */*
	I0604 21:56:26.653092    6964 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0604 21:56:26.653092    6964 round_trippers.go:577] Response Headers:
	I0604 21:56:26.653475    6964 round_trippers.go:580]     Audit-Id: dce48c79-c47a-432b-8ac5-2c80dbdf14ae
	I0604 21:56:26.653475    6964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 21:56:26.653475    6964 round_trippers.go:580]     Content-Type: application/json
	I0604 21:56:26.653475    6964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bc67a822-ecf2-4193-ad88-87cb0ce26a44
	I0604 21:56:26.653562    6964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 559e51f8-94bd-4b67-be51-fd991bc8927a
	I0604 21:56:26.653562    6964 round_trippers.go:580]     Date: Tue, 04 Jun 2024 21:56:26 GMT
	I0604 21:56:26.654070    6964 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-235400","uid":"a943ebb7-2ab5-4491-ae79-8c602d01cd21","resourceVersion":"546","creationTimestamp":"2024-06-04T21:53:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-235400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"functional-235400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_04T21_53_26_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-04T21:53:22Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0604 21:56:26.654401    6964 node_ready.go:49] node "functional-235400" has status "Ready":"True"
	I0604 21:56:26.654401    6964 node_ready.go:38] duration metric: took 5.3269ms for node "functional-235400" to be "Ready" ...
	I0604 21:56:26.654401    6964 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0604 21:56:26.654930    6964 round_trippers.go:463] GET https://172.20.136.157:8441/api/v1/namespaces/kube-system/pods
	I0604 21:56:26.654970    6964 round_trippers.go:469] Request Headers:
	I0604 21:56:26.655023    6964 round_trippers.go:473]     Accept: application/json, */*
	I0604 21:56:26.655023    6964 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 21:56:26.659740    6964 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0604 21:56:26.659878    6964 round_trippers.go:577] Response Headers:
	I0604 21:56:26.659878    6964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 21:56:26.659878    6964 round_trippers.go:580]     Content-Type: application/json
	I0604 21:56:26.659878    6964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bc67a822-ecf2-4193-ad88-87cb0ce26a44
	I0604 21:56:26.659878    6964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 559e51f8-94bd-4b67-be51-fd991bc8927a
	I0604 21:56:26.659959    6964 round_trippers.go:580]     Date: Tue, 04 Jun 2024 21:56:26 GMT
	I0604 21:56:26.659959    6964 round_trippers.go:580]     Audit-Id: 9d5c9c4f-0a84-4b56-a144-ac6a0febd4fc
	I0604 21:56:26.661338    6964 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"621"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-gfcww","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"8ce65d0a-5c28-4a96-a273-1c7987dcffb1","resourceVersion":"611","creationTimestamp":"2024-06-04T21:53:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"60a50cda-051b-43ee-9570-5034455df473","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-04T21:53:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"60a50cda-051b-43ee-9570-5034455df473\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 50137 chars]
	I0604 21:56:26.664284    6964 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-gfcww" in "kube-system" namespace to be "Ready" ...
	I0604 21:56:26.664284    6964 round_trippers.go:463] GET https://172.20.136.157:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-gfcww
	I0604 21:56:26.664284    6964 round_trippers.go:469] Request Headers:
	I0604 21:56:26.664284    6964 round_trippers.go:473]     Accept: application/json, */*
	I0604 21:56:26.664284    6964 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 21:56:26.667430    6964 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0604 21:56:26.667430    6964 round_trippers.go:577] Response Headers:
	I0604 21:56:26.667430    6964 round_trippers.go:580]     Content-Type: application/json
	I0604 21:56:26.667430    6964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bc67a822-ecf2-4193-ad88-87cb0ce26a44
	I0604 21:56:26.667430    6964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 559e51f8-94bd-4b67-be51-fd991bc8927a
	I0604 21:56:26.667430    6964 round_trippers.go:580]     Date: Tue, 04 Jun 2024 21:56:26 GMT
	I0604 21:56:26.667430    6964 round_trippers.go:580]     Audit-Id: 22dbe710-f316-4931-96e9-44eb11a61a80
	I0604 21:56:26.667430    6964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 21:56:26.668093    6964 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-gfcww","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"8ce65d0a-5c28-4a96-a273-1c7987dcffb1","resourceVersion":"611","creationTimestamp":"2024-06-04T21:53:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"60a50cda-051b-43ee-9570-5034455df473","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-04T21:53:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"60a50cda-051b-43ee-9570-5034455df473\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6452 chars]
	I0604 21:56:26.668901    6964 round_trippers.go:463] GET https://172.20.136.157:8441/api/v1/nodes/functional-235400
	I0604 21:56:26.668901    6964 round_trippers.go:469] Request Headers:
	I0604 21:56:26.668901    6964 round_trippers.go:473]     Accept: application/json, */*
	I0604 21:56:26.668901    6964 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 21:56:26.671903    6964 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0604 21:56:26.671903    6964 round_trippers.go:577] Response Headers:
	I0604 21:56:26.671903    6964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 21:56:26.672481    6964 round_trippers.go:580]     Content-Type: application/json
	I0604 21:56:26.672481    6964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bc67a822-ecf2-4193-ad88-87cb0ce26a44
	I0604 21:56:26.672555    6964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 559e51f8-94bd-4b67-be51-fd991bc8927a
	I0604 21:56:26.672555    6964 round_trippers.go:580]     Date: Tue, 04 Jun 2024 21:56:26 GMT
	I0604 21:56:26.672555    6964 round_trippers.go:580]     Audit-Id: 1618b856-5535-449e-919c-453404d806a2
	I0604 21:56:26.672829    6964 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-235400","uid":"a943ebb7-2ab5-4491-ae79-8c602d01cd21","resourceVersion":"546","creationTimestamp":"2024-06-04T21:53:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-235400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"functional-235400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_04T21_53_26_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-04T21:53:22Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0604 21:56:26.673626    6964 pod_ready.go:92] pod "coredns-7db6d8ff4d-gfcww" in "kube-system" namespace has status "Ready":"True"
	I0604 21:56:26.673767    6964 pod_ready.go:81] duration metric: took 9.4829ms for pod "coredns-7db6d8ff4d-gfcww" in "kube-system" namespace to be "Ready" ...
	I0604 21:56:26.673767    6964 pod_ready.go:78] waiting up to 6m0s for pod "etcd-functional-235400" in "kube-system" namespace to be "Ready" ...
	I0604 21:56:26.781489    6964 request.go:629] Waited for 107.589ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.136.157:8441/api/v1/namespaces/kube-system/pods/etcd-functional-235400
	I0604 21:56:26.781726    6964 round_trippers.go:463] GET https://172.20.136.157:8441/api/v1/namespaces/kube-system/pods/etcd-functional-235400
	I0604 21:56:26.781788    6964 round_trippers.go:469] Request Headers:
	I0604 21:56:26.781788    6964 round_trippers.go:473]     Accept: application/json, */*
	I0604 21:56:26.781854    6964 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 21:56:26.787168    6964 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0604 21:56:26.787168    6964 round_trippers.go:577] Response Headers:
	I0604 21:56:26.787168    6964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bc67a822-ecf2-4193-ad88-87cb0ce26a44
	I0604 21:56:26.787168    6964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 559e51f8-94bd-4b67-be51-fd991bc8927a
	I0604 21:56:26.787168    6964 round_trippers.go:580]     Date: Tue, 04 Jun 2024 21:56:26 GMT
	I0604 21:56:26.787168    6964 round_trippers.go:580]     Audit-Id: dc15f836-9f79-43b7-aa53-327719762853
	I0604 21:56:26.787168    6964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 21:56:26.787168    6964 round_trippers.go:580]     Content-Type: application/json
	I0604 21:56:26.787553    6964 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-235400","namespace":"kube-system","uid":"a92a3741-ab9e-4216-96a0-e52e897928b6","resourceVersion":"614","creationTimestamp":"2024-06-04T21:53:26Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.20.136.157:2379","kubernetes.io/config.hash":"ac62a29b768ac90672d20af88ce87818","kubernetes.io/config.mirror":"ac62a29b768ac90672d20af88ce87818","kubernetes.io/config.seen":"2024-06-04T21:53:26.215208271Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-235400","uid":"a943ebb7-2ab5-4491-ae79-8c602d01cd21","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-04T21:53:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6380 chars]
	I0604 21:56:26.988914    6964 request.go:629] Waited for 200.6124ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.136.157:8441/api/v1/nodes/functional-235400
	I0604 21:56:26.989024    6964 round_trippers.go:463] GET https://172.20.136.157:8441/api/v1/nodes/functional-235400
	I0604 21:56:26.989024    6964 round_trippers.go:469] Request Headers:
	I0604 21:56:26.989024    6964 round_trippers.go:473]     Accept: application/json, */*
	I0604 21:56:26.989024    6964 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 21:56:26.996236    6964 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0604 21:56:26.996236    6964 round_trippers.go:577] Response Headers:
	I0604 21:56:26.996358    6964 round_trippers.go:580]     Audit-Id: 7d4c8fcb-ebf2-4c5a-b492-2e27c9a0c122
	I0604 21:56:26.996358    6964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 21:56:26.996358    6964 round_trippers.go:580]     Content-Type: application/json
	I0604 21:56:26.996408    6964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bc67a822-ecf2-4193-ad88-87cb0ce26a44
	I0604 21:56:26.996408    6964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 559e51f8-94bd-4b67-be51-fd991bc8927a
	I0604 21:56:26.996408    6964 round_trippers.go:580]     Date: Tue, 04 Jun 2024 21:56:26 GMT
	I0604 21:56:26.996454    6964 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-235400","uid":"a943ebb7-2ab5-4491-ae79-8c602d01cd21","resourceVersion":"546","creationTimestamp":"2024-06-04T21:53:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-235400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"functional-235400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_04T21_53_26_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-04T21:53:22Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0604 21:56:26.997315    6964 pod_ready.go:92] pod "etcd-functional-235400" in "kube-system" namespace has status "Ready":"True"
	I0604 21:56:26.997315    6964 pod_ready.go:81] duration metric: took 323.5452ms for pod "etcd-functional-235400" in "kube-system" namespace to be "Ready" ...
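	(The "Waited for ... due to client-side throttling, not priority and fairness" entries above come from client-go's local rate limiter, which defaults to 5 QPS with a burst of 10, rather than from API-server priority and fairness. The snippet below is a hedged illustration of how that limit can be raised on a rest.Config when building a client directly; the QPS/Burst values are arbitrary examples, and this is not a change the test itself makes.)

	// Illustrative only: raise client-go's client-side rate limit so bursts
	// of GETs like the ones above are not delayed locally.
	package main

	import (
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cfg.QPS = 50    // client-go default is 5 requests/second
		cfg.Burst = 100 // client-go default burst is 10
		if _, err := kubernetes.NewForConfig(cfg); err != nil {
			panic(err)
		}
	}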
	I0604 21:56:26.997315    6964 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-functional-235400" in "kube-system" namespace to be "Ready" ...
	I0604 21:56:27.197039    6964 request.go:629] Waited for 199.3488ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.136.157:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-235400
	I0604 21:56:27.197039    6964 round_trippers.go:463] GET https://172.20.136.157:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-235400
	I0604 21:56:27.197039    6964 round_trippers.go:469] Request Headers:
	I0604 21:56:27.197039    6964 round_trippers.go:473]     Accept: application/json, */*
	I0604 21:56:27.197039    6964 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 21:56:27.202217    6964 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0604 21:56:27.202687    6964 round_trippers.go:577] Response Headers:
	I0604 21:56:27.202687    6964 round_trippers.go:580]     Content-Type: application/json
	I0604 21:56:27.202687    6964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bc67a822-ecf2-4193-ad88-87cb0ce26a44
	I0604 21:56:27.202687    6964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 559e51f8-94bd-4b67-be51-fd991bc8927a
	I0604 21:56:27.202687    6964 round_trippers.go:580]     Date: Tue, 04 Jun 2024 21:56:27 GMT
	I0604 21:56:27.202687    6964 round_trippers.go:580]     Audit-Id: 83c6453b-ee92-4813-88fe-47bb7bf4bd7a
	I0604 21:56:27.202687    6964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 21:56:27.203084    6964 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-235400","namespace":"kube-system","uid":"97c5c050-f0df-4404-8f46-6471f9c83e91","resourceVersion":"620","creationTimestamp":"2024-06-04T21:53:26Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.20.136.157:8441","kubernetes.io/config.hash":"7e809a072d1b36cbd8dbd9fdfa02b35b","kubernetes.io/config.mirror":"7e809a072d1b36cbd8dbd9fdfa02b35b","kubernetes.io/config.seen":"2024-06-04T21:53:26.215213671Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-235400","uid":"a943ebb7-2ab5-4491-ae79-8c602d01cd21","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-04T21:53:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.
kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernet [truncated 7914 chars]
	I0604 21:56:27.387021    6964 request.go:629] Waited for 183.1277ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.136.157:8441/api/v1/nodes/functional-235400
	I0604 21:56:27.387374    6964 round_trippers.go:463] GET https://172.20.136.157:8441/api/v1/nodes/functional-235400
	I0604 21:56:27.387409    6964 round_trippers.go:469] Request Headers:
	I0604 21:56:27.387476    6964 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 21:56:27.387573    6964 round_trippers.go:473]     Accept: application/json, */*
	I0604 21:56:27.392415    6964 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0604 21:56:27.392415    6964 round_trippers.go:577] Response Headers:
	I0604 21:56:27.392415    6964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 21:56:27.392415    6964 round_trippers.go:580]     Content-Type: application/json
	I0604 21:56:27.392415    6964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bc67a822-ecf2-4193-ad88-87cb0ce26a44
	I0604 21:56:27.392415    6964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 559e51f8-94bd-4b67-be51-fd991bc8927a
	I0604 21:56:27.392415    6964 round_trippers.go:580]     Date: Tue, 04 Jun 2024 21:56:27 GMT
	I0604 21:56:27.392415    6964 round_trippers.go:580]     Audit-Id: f7e35035-fc57-41d9-ad35-d663943b764f
	I0604 21:56:27.392415    6964 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-235400","uid":"a943ebb7-2ab5-4491-ae79-8c602d01cd21","resourceVersion":"546","creationTimestamp":"2024-06-04T21:53:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-235400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"functional-235400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_04T21_53_26_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-04T21:53:22Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0604 21:56:27.393449    6964 pod_ready.go:92] pod "kube-apiserver-functional-235400" in "kube-system" namespace has status "Ready":"True"
	I0604 21:56:27.393449    6964 pod_ready.go:81] duration metric: took 396.1309ms for pod "kube-apiserver-functional-235400" in "kube-system" namespace to be "Ready" ...
	I0604 21:56:27.393449    6964 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-functional-235400" in "kube-system" namespace to be "Ready" ...
	I0604 21:56:27.593673    6964 request.go:629] Waited for 200.2221ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.136.157:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-235400
	I0604 21:56:27.593853    6964 round_trippers.go:463] GET https://172.20.136.157:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-235400
	I0604 21:56:27.593957    6964 round_trippers.go:469] Request Headers:
	I0604 21:56:27.593957    6964 round_trippers.go:473]     Accept: application/json, */*
	I0604 21:56:27.593992    6964 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 21:56:27.597497    6964 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0604 21:56:27.597699    6964 round_trippers.go:577] Response Headers:
	I0604 21:56:27.597699    6964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bc67a822-ecf2-4193-ad88-87cb0ce26a44
	I0604 21:56:27.597788    6964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 559e51f8-94bd-4b67-be51-fd991bc8927a
	I0604 21:56:27.597788    6964 round_trippers.go:580]     Date: Tue, 04 Jun 2024 21:56:27 GMT
	I0604 21:56:27.597788    6964 round_trippers.go:580]     Audit-Id: 1f450a5b-eab8-415a-8505-36d1ae0326af
	I0604 21:56:27.597788    6964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 21:56:27.597788    6964 round_trippers.go:580]     Content-Type: application/json
	I0604 21:56:27.598364    6964 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-functional-235400","namespace":"kube-system","uid":"e62d7208-9bc7-486a-b79e-c4f2ca54e84b","resourceVersion":"616","creationTimestamp":"2024-06-04T21:53:26Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"c21cbb8fec9519fcb1d5344ff7de18f7","kubernetes.io/config.mirror":"c21cbb8fec9519fcb1d5344ff7de18f7","kubernetes.io/config.seen":"2024-06-04T21:53:26.215215071Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-235400","uid":"a943ebb7-2ab5-4491-ae79-8c602d01cd21","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-04T21:53:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes
.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{"." [truncated 7477 chars]
	I0604 21:56:27.782675    6964 request.go:629] Waited for 183.3696ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.136.157:8441/api/v1/nodes/functional-235400
	I0604 21:56:27.782675    6964 round_trippers.go:463] GET https://172.20.136.157:8441/api/v1/nodes/functional-235400
	I0604 21:56:27.782891    6964 round_trippers.go:469] Request Headers:
	I0604 21:56:27.782891    6964 round_trippers.go:473]     Accept: application/json, */*
	I0604 21:56:27.782891    6964 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 21:56:27.790442    6964 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0604 21:56:27.790442    6964 round_trippers.go:577] Response Headers:
	I0604 21:56:27.790442    6964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 21:56:27.790442    6964 round_trippers.go:580]     Content-Type: application/json
	I0604 21:56:27.790442    6964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bc67a822-ecf2-4193-ad88-87cb0ce26a44
	I0604 21:56:27.790442    6964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 559e51f8-94bd-4b67-be51-fd991bc8927a
	I0604 21:56:27.790442    6964 round_trippers.go:580]     Date: Tue, 04 Jun 2024 21:56:27 GMT
	I0604 21:56:27.790442    6964 round_trippers.go:580]     Audit-Id: 8e8102a2-4284-4a65-bc92-16c6188fa830
	I0604 21:56:27.791034    6964 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-235400","uid":"a943ebb7-2ab5-4491-ae79-8c602d01cd21","resourceVersion":"546","creationTimestamp":"2024-06-04T21:53:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-235400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"functional-235400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_04T21_53_26_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-04T21:53:22Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0604 21:56:27.791496    6964 pod_ready.go:92] pod "kube-controller-manager-functional-235400" in "kube-system" namespace has status "Ready":"True"
	I0604 21:56:27.791616    6964 pod_ready.go:81] duration metric: took 398.1644ms for pod "kube-controller-manager-functional-235400" in "kube-system" namespace to be "Ready" ...
	I0604 21:56:27.791616    6964 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2xs47" in "kube-system" namespace to be "Ready" ...
	I0604 21:56:27.986525    6964 request.go:629] Waited for 194.23ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.136.157:8441/api/v1/namespaces/kube-system/pods/kube-proxy-2xs47
	I0604 21:56:27.986525    6964 round_trippers.go:463] GET https://172.20.136.157:8441/api/v1/namespaces/kube-system/pods/kube-proxy-2xs47
	I0604 21:56:27.986525    6964 round_trippers.go:469] Request Headers:
	I0604 21:56:27.986525    6964 round_trippers.go:473]     Accept: application/json, */*
	I0604 21:56:27.986525    6964 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 21:56:27.990761    6964 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0604 21:56:27.990761    6964 round_trippers.go:577] Response Headers:
	I0604 21:56:27.990761    6964 round_trippers.go:580]     Audit-Id: f7c1bed8-c891-4f5e-b7c1-fc4bb5761032
	I0604 21:56:27.990761    6964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 21:56:27.990761    6964 round_trippers.go:580]     Content-Type: application/json
	I0604 21:56:27.990761    6964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bc67a822-ecf2-4193-ad88-87cb0ce26a44
	I0604 21:56:27.990761    6964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 559e51f8-94bd-4b67-be51-fd991bc8927a
	I0604 21:56:27.990761    6964 round_trippers.go:580]     Date: Tue, 04 Jun 2024 21:56:27 GMT
	I0604 21:56:27.990761    6964 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-2xs47","generateName":"kube-proxy-","namespace":"kube-system","uid":"144b79dc-c192-4e05-a481-2047f1a943c9","resourceVersion":"557","creationTimestamp":"2024-06-04T21:53:39Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"ff7b329f-e3eb-469e-b3d9-58eb3a5dd994","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-04T21:53:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ff7b329f-e3eb-469e-b3d9-58eb3a5dd994\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6040 chars]
	I0604 21:56:28.191138    6964 request.go:629] Waited for 199.2403ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.136.157:8441/api/v1/nodes/functional-235400
	I0604 21:56:28.191138    6964 round_trippers.go:463] GET https://172.20.136.157:8441/api/v1/nodes/functional-235400
	I0604 21:56:28.191394    6964 round_trippers.go:469] Request Headers:
	I0604 21:56:28.191394    6964 round_trippers.go:473]     Accept: application/json, */*
	I0604 21:56:28.191452    6964 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 21:56:28.198987    6964 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0604 21:56:28.199555    6964 round_trippers.go:577] Response Headers:
	I0604 21:56:28.199555    6964 round_trippers.go:580]     Date: Tue, 04 Jun 2024 21:56:28 GMT
	I0604 21:56:28.199555    6964 round_trippers.go:580]     Audit-Id: 2a7e997e-c79c-4497-953d-a54818df37f4
	I0604 21:56:28.199555    6964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 21:56:28.199555    6964 round_trippers.go:580]     Content-Type: application/json
	I0604 21:56:28.199555    6964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bc67a822-ecf2-4193-ad88-87cb0ce26a44
	I0604 21:56:28.199619    6964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 559e51f8-94bd-4b67-be51-fd991bc8927a
	I0604 21:56:28.199777    6964 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-235400","uid":"a943ebb7-2ab5-4491-ae79-8c602d01cd21","resourceVersion":"546","creationTimestamp":"2024-06-04T21:53:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-235400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"functional-235400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_04T21_53_26_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-04T21:53:22Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0604 21:56:28.200324    6964 pod_ready.go:92] pod "kube-proxy-2xs47" in "kube-system" namespace has status "Ready":"True"
	I0604 21:56:28.200324    6964 pod_ready.go:81] duration metric: took 408.6126ms for pod "kube-proxy-2xs47" in "kube-system" namespace to be "Ready" ...
	I0604 21:56:28.200560    6964 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-functional-235400" in "kube-system" namespace to be "Ready" ...
	I0604 21:56:28.381134    6964 request.go:629] Waited for 180.3034ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.136.157:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-235400
	I0604 21:56:28.381360    6964 round_trippers.go:463] GET https://172.20.136.157:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-235400
	I0604 21:56:28.381360    6964 round_trippers.go:469] Request Headers:
	I0604 21:56:28.381360    6964 round_trippers.go:473]     Accept: application/json, */*
	I0604 21:56:28.381479    6964 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 21:56:28.385603    6964 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0604 21:56:28.385975    6964 round_trippers.go:577] Response Headers:
	I0604 21:56:28.385975    6964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 21:56:28.385975    6964 round_trippers.go:580]     Content-Type: application/json
	I0604 21:56:28.386077    6964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bc67a822-ecf2-4193-ad88-87cb0ce26a44
	I0604 21:56:28.386077    6964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 559e51f8-94bd-4b67-be51-fd991bc8927a
	I0604 21:56:28.386077    6964 round_trippers.go:580]     Date: Tue, 04 Jun 2024 21:56:28 GMT
	I0604 21:56:28.386123    6964 round_trippers.go:580]     Audit-Id: 6968f536-7228-4f47-ba89-d8a542919c14
	I0604 21:56:28.386420    6964 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-235400","namespace":"kube-system","uid":"6d9b2d8e-c3f2-452b-9d86-c16e21c60e69","resourceVersion":"613","creationTimestamp":"2024-06-04T21:53:26Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"0483b6e339171810d67647ce15760b58","kubernetes.io/config.mirror":"0483b6e339171810d67647ce15760b58","kubernetes.io/config.seen":"2024-06-04T21:53:26.215216171Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-235400","uid":"a943ebb7-2ab5-4491-ae79-8c602d01cd21","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-04T21:53:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5207 chars]
	I0604 21:56:28.586748    6964 request.go:629] Waited for 199.0454ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.136.157:8441/api/v1/nodes/functional-235400
	I0604 21:56:28.586852    6964 round_trippers.go:463] GET https://172.20.136.157:8441/api/v1/nodes/functional-235400
	I0604 21:56:28.586852    6964 round_trippers.go:469] Request Headers:
	I0604 21:56:28.586852    6964 round_trippers.go:473]     Accept: application/json, */*
	I0604 21:56:28.586852    6964 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 21:56:28.590280    6964 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0604 21:56:28.590280    6964 round_trippers.go:577] Response Headers:
	I0604 21:56:28.590280    6964 round_trippers.go:580]     Audit-Id: a1321afa-2ebe-4870-b2fe-5d4b3a7ad257
	I0604 21:56:28.590280    6964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 21:56:28.590280    6964 round_trippers.go:580]     Content-Type: application/json
	I0604 21:56:28.590280    6964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bc67a822-ecf2-4193-ad88-87cb0ce26a44
	I0604 21:56:28.590280    6964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 559e51f8-94bd-4b67-be51-fd991bc8927a
	I0604 21:56:28.590280    6964 round_trippers.go:580]     Date: Tue, 04 Jun 2024 21:56:28 GMT
	I0604 21:56:28.591800    6964 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-235400","uid":"a943ebb7-2ab5-4491-ae79-8c602d01cd21","resourceVersion":"546","creationTimestamp":"2024-06-04T21:53:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-235400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"functional-235400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_04T21_53_26_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-06-04T21:53:22Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0604 21:56:28.592015    6964 pod_ready.go:92] pod "kube-scheduler-functional-235400" in "kube-system" namespace has status "Ready":"True"
	I0604 21:56:28.592015    6964 pod_ready.go:81] duration metric: took 391.4517ms for pod "kube-scheduler-functional-235400" in "kube-system" namespace to be "Ready" ...
	I0604 21:56:28.592015    6964 pod_ready.go:38] duration metric: took 1.9375989s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0604 21:56:28.592015    6964 api_server.go:52] waiting for apiserver process to appear ...
	I0604 21:56:28.609108    6964 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0604 21:56:28.643964    6964 command_runner.go:130] > 5558
	I0604 21:56:28.644364    6964 api_server.go:72] duration metric: took 2.3953956s to wait for apiserver process to appear ...
	I0604 21:56:28.644364    6964 api_server.go:88] waiting for apiserver healthz status ...
	I0604 21:56:28.644364    6964 api_server.go:253] Checking apiserver healthz at https://172.20.136.157:8441/healthz ...
	I0604 21:56:28.655662    6964 api_server.go:279] https://172.20.136.157:8441/healthz returned 200:
	ok
	I0604 21:56:28.655851    6964 round_trippers.go:463] GET https://172.20.136.157:8441/version
	I0604 21:56:28.655910    6964 round_trippers.go:469] Request Headers:
	I0604 21:56:28.655910    6964 round_trippers.go:473]     Accept: application/json, */*
	I0604 21:56:28.655910    6964 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 21:56:28.657539    6964 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0604 21:56:28.657539    6964 round_trippers.go:577] Response Headers:
	I0604 21:56:28.657539    6964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 559e51f8-94bd-4b67-be51-fd991bc8927a
	I0604 21:56:28.657539    6964 round_trippers.go:580]     Content-Length: 263
	I0604 21:56:28.657539    6964 round_trippers.go:580]     Date: Tue, 04 Jun 2024 21:56:28 GMT
	I0604 21:56:28.657539    6964 round_trippers.go:580]     Audit-Id: 2e7b994c-9e1d-46c3-a3ed-5e897140fa35
	I0604 21:56:28.657539    6964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 21:56:28.657539    6964 round_trippers.go:580]     Content-Type: application/json
	I0604 21:56:28.657539    6964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bc67a822-ecf2-4193-ad88-87cb0ce26a44
	I0604 21:56:28.657539    6964 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.1",
	  "gitCommit": "6911225c3f747e1cd9d109c305436d08b668f086",
	  "gitTreeState": "clean",
	  "buildDate": "2024-05-14T10:42:02Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0604 21:56:28.657539    6964 api_server.go:141] control plane version: v1.30.1
	I0604 21:56:28.657539    6964 api_server.go:131] duration metric: took 13.1744ms to wait for apiserver health ...
	I0604 21:56:28.657539    6964 system_pods.go:43] waiting for kube-system pods to appear ...
	I0604 21:56:28.667354    6964 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 21:56:28.667354    6964 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:56:28.668030    6964 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0604 21:56:28.668030    6964 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 21:56:28.668557    6964 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:56:28.671996    6964 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0604 21:56:28.668963    6964 kapi.go:59] client config for functional-235400: &rest.Config{Host:"https://172.20.136.157:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-235400\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-235400\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil),
CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x240e1a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0604 21:56:28.673273    6964 addons.go:234] Setting addon default-storageclass=true in "functional-235400"
	W0604 21:56:28.674762    6964 addons.go:243] addon default-storageclass should already be in state true
	I0604 21:56:28.674762    6964 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0604 21:56:28.674762    6964 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0604 21:56:28.674762    6964 host.go:66] Checking if "functional-235400" exists ...
	I0604 21:56:28.674762    6964 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-235400 ).state
	I0604 21:56:28.675875    6964 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-235400 ).state
	I0604 21:56:28.790616    6964 request.go:629] Waited for 132.9077ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.136.157:8441/api/v1/namespaces/kube-system/pods
	I0604 21:56:28.790779    6964 round_trippers.go:463] GET https://172.20.136.157:8441/api/v1/namespaces/kube-system/pods
	I0604 21:56:28.790779    6964 round_trippers.go:469] Request Headers:
	I0604 21:56:28.790872    6964 round_trippers.go:473]     Accept: application/json, */*
	I0604 21:56:28.790872    6964 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 21:56:28.796528    6964 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0604 21:56:28.796831    6964 round_trippers.go:577] Response Headers:
	I0604 21:56:28.796831    6964 round_trippers.go:580]     Audit-Id: 78ccd12b-1675-483c-a1c9-9ae7216096f6
	I0604 21:56:28.796831    6964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 21:56:28.796831    6964 round_trippers.go:580]     Content-Type: application/json
	I0604 21:56:28.796831    6964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bc67a822-ecf2-4193-ad88-87cb0ce26a44
	I0604 21:56:28.796831    6964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 559e51f8-94bd-4b67-be51-fd991bc8927a
	I0604 21:56:28.796921    6964 round_trippers.go:580]     Date: Tue, 04 Jun 2024 21:56:28 GMT
	I0604 21:56:28.797976    6964 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"626"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-gfcww","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"8ce65d0a-5c28-4a96-a273-1c7987dcffb1","resourceVersion":"611","creationTimestamp":"2024-06-04T21:53:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"60a50cda-051b-43ee-9570-5034455df473","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-04T21:53:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"60a50cda-051b-43ee-9570-5034455df473\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 50137 chars]
	I0604 21:56:28.800303    6964 system_pods.go:59] 7 kube-system pods found
	I0604 21:56:28.800303    6964 system_pods.go:61] "coredns-7db6d8ff4d-gfcww" [8ce65d0a-5c28-4a96-a273-1c7987dcffb1] Running
	I0604 21:56:28.800303    6964 system_pods.go:61] "etcd-functional-235400" [a92a3741-ab9e-4216-96a0-e52e897928b6] Running
	I0604 21:56:28.800303    6964 system_pods.go:61] "kube-apiserver-functional-235400" [97c5c050-f0df-4404-8f46-6471f9c83e91] Running
	I0604 21:56:28.800303    6964 system_pods.go:61] "kube-controller-manager-functional-235400" [e62d7208-9bc7-486a-b79e-c4f2ca54e84b] Running
	I0604 21:56:28.800303    6964 system_pods.go:61] "kube-proxy-2xs47" [144b79dc-c192-4e05-a481-2047f1a943c9] Running
	I0604 21:56:28.800303    6964 system_pods.go:61] "kube-scheduler-functional-235400" [6d9b2d8e-c3f2-452b-9d86-c16e21c60e69] Running
	I0604 21:56:28.800303    6964 system_pods.go:61] "storage-provisioner" [787187d8-02e6-447b-a0a1-dc664d9226e5] Running
	I0604 21:56:28.800303    6964 system_pods.go:74] duration metric: took 142.7627ms to wait for pod list to return data ...
	I0604 21:56:28.800303    6964 default_sa.go:34] waiting for default service account to be created ...
	I0604 21:56:28.996278    6964 request.go:629] Waited for 195.7316ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.136.157:8441/api/v1/namespaces/default/serviceaccounts
	I0604 21:56:28.996403    6964 round_trippers.go:463] GET https://172.20.136.157:8441/api/v1/namespaces/default/serviceaccounts
	I0604 21:56:28.996403    6964 round_trippers.go:469] Request Headers:
	I0604 21:56:28.996526    6964 round_trippers.go:473]     Accept: application/json, */*
	I0604 21:56:28.996526    6964 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 21:56:29.001658    6964 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0604 21:56:29.001658    6964 round_trippers.go:577] Response Headers:
	I0604 21:56:29.001658    6964 round_trippers.go:580]     Date: Tue, 04 Jun 2024 21:56:29 GMT
	I0604 21:56:29.001658    6964 round_trippers.go:580]     Audit-Id: 7455a3f3-d03e-4bb0-b142-af9ca9dc04e6
	I0604 21:56:29.001658    6964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 21:56:29.001658    6964 round_trippers.go:580]     Content-Type: application/json
	I0604 21:56:29.001658    6964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bc67a822-ecf2-4193-ad88-87cb0ce26a44
	I0604 21:56:29.001658    6964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 559e51f8-94bd-4b67-be51-fd991bc8927a
	I0604 21:56:29.001658    6964 round_trippers.go:580]     Content-Length: 261
	I0604 21:56:29.001776    6964 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"626"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"9aee0c15-fa67-4a3c-a3a1-194bcae7101d","resourceVersion":"345","creationTimestamp":"2024-06-04T21:53:39Z"}}]}
	I0604 21:56:29.002091    6964 default_sa.go:45] found service account: "default"
	I0604 21:56:29.002183    6964 default_sa.go:55] duration metric: took 201.879ms for default service account to be created ...
	I0604 21:56:29.002183    6964 system_pods.go:116] waiting for k8s-apps to be running ...
	I0604 21:56:29.186867    6964 request.go:629] Waited for 184.4401ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.136.157:8441/api/v1/namespaces/kube-system/pods
	I0604 21:56:29.187092    6964 round_trippers.go:463] GET https://172.20.136.157:8441/api/v1/namespaces/kube-system/pods
	I0604 21:56:29.187092    6964 round_trippers.go:469] Request Headers:
	I0604 21:56:29.187215    6964 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 21:56:29.187248    6964 round_trippers.go:473]     Accept: application/json, */*
	I0604 21:56:29.194993    6964 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0604 21:56:29.194993    6964 round_trippers.go:577] Response Headers:
	I0604 21:56:29.195327    6964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 559e51f8-94bd-4b67-be51-fd991bc8927a
	I0604 21:56:29.195327    6964 round_trippers.go:580]     Date: Tue, 04 Jun 2024 21:56:29 GMT
	I0604 21:56:29.195327    6964 round_trippers.go:580]     Audit-Id: bbbc6752-99ce-4425-8096-cb10f4276938
	I0604 21:56:29.195327    6964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 21:56:29.195327    6964 round_trippers.go:580]     Content-Type: application/json
	I0604 21:56:29.195327    6964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bc67a822-ecf2-4193-ad88-87cb0ce26a44
	I0604 21:56:29.198842    6964 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"626"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-gfcww","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"8ce65d0a-5c28-4a96-a273-1c7987dcffb1","resourceVersion":"611","creationTimestamp":"2024-06-04T21:53:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"60a50cda-051b-43ee-9570-5034455df473","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-04T21:53:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"60a50cda-051b-43ee-9570-5034455df473\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 50137 chars]
	I0604 21:56:29.204846    6964 system_pods.go:86] 7 kube-system pods found
	I0604 21:56:29.204846    6964 system_pods.go:89] "coredns-7db6d8ff4d-gfcww" [8ce65d0a-5c28-4a96-a273-1c7987dcffb1] Running
	I0604 21:56:29.204846    6964 system_pods.go:89] "etcd-functional-235400" [a92a3741-ab9e-4216-96a0-e52e897928b6] Running
	I0604 21:56:29.204846    6964 system_pods.go:89] "kube-apiserver-functional-235400" [97c5c050-f0df-4404-8f46-6471f9c83e91] Running
	I0604 21:56:29.204846    6964 system_pods.go:89] "kube-controller-manager-functional-235400" [e62d7208-9bc7-486a-b79e-c4f2ca54e84b] Running
	I0604 21:56:29.204846    6964 system_pods.go:89] "kube-proxy-2xs47" [144b79dc-c192-4e05-a481-2047f1a943c9] Running
	I0604 21:56:29.204846    6964 system_pods.go:89] "kube-scheduler-functional-235400" [6d9b2d8e-c3f2-452b-9d86-c16e21c60e69] Running
	I0604 21:56:29.204846    6964 system_pods.go:89] "storage-provisioner" [787187d8-02e6-447b-a0a1-dc664d9226e5] Running
	I0604 21:56:29.204846    6964 system_pods.go:126] duration metric: took 202.6606ms to wait for k8s-apps to be running ...
	I0604 21:56:29.204846    6964 system_svc.go:44] waiting for kubelet service to be running ....
	I0604 21:56:29.218832    6964 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0604 21:56:29.247972    6964 system_svc.go:56] duration metric: took 43.1257ms WaitForService to wait for kubelet
	I0604 21:56:29.248154    6964 kubeadm.go:576] duration metric: took 2.9991804s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0604 21:56:29.248154    6964 node_conditions.go:102] verifying NodePressure condition ...
	I0604 21:56:29.394726    6964 request.go:629] Waited for 146.3576ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.136.157:8441/api/v1/nodes
	I0604 21:56:29.394951    6964 round_trippers.go:463] GET https://172.20.136.157:8441/api/v1/nodes
	I0604 21:56:29.394951    6964 round_trippers.go:469] Request Headers:
	I0604 21:56:29.394951    6964 round_trippers.go:473]     Accept: application/json, */*
	I0604 21:56:29.394951    6964 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 21:56:29.398892    6964 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0604 21:56:29.399250    6964 round_trippers.go:577] Response Headers:
	I0604 21:56:29.399250    6964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 559e51f8-94bd-4b67-be51-fd991bc8927a
	I0604 21:56:29.399250    6964 round_trippers.go:580]     Date: Tue, 04 Jun 2024 21:56:29 GMT
	I0604 21:56:29.399250    6964 round_trippers.go:580]     Audit-Id: bdf94109-246d-47d9-9db6-c3167e6ae042
	I0604 21:56:29.399250    6964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 21:56:29.399250    6964 round_trippers.go:580]     Content-Type: application/json
	I0604 21:56:29.399250    6964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bc67a822-ecf2-4193-ad88-87cb0ce26a44
	I0604 21:56:29.399552    6964 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"626"},"items":[{"metadata":{"name":"functional-235400","uid":"a943ebb7-2ab5-4491-ae79-8c602d01cd21","resourceVersion":"546","creationTimestamp":"2024-06-04T21:53:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-235400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"functional-235400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_04T21_53_26_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedF
ields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","ti [truncated 4841 chars]
	I0604 21:56:29.400242    6964 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0604 21:56:29.400242    6964 node_conditions.go:123] node cpu capacity is 2
	I0604 21:56:29.400353    6964 node_conditions.go:105] duration metric: took 152.1977ms to run NodePressure ...
	I0604 21:56:29.400353    6964 start.go:240] waiting for startup goroutines ...
	I0604 21:56:31.127159    6964 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 21:56:31.127159    6964 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:56:31.127159    6964 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 21:56:31.127684    6964 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:56:31.127159    6964 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0604 21:56:31.127684    6964 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0604 21:56:31.127684    6964 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-235400 ).networkadapters[0]).ipaddresses[0]
	I0604 21:56:31.127684    6964 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-235400 ).state
	I0604 21:56:33.550685    6964 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 21:56:33.550685    6964 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:56:33.550685    6964 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-235400 ).networkadapters[0]).ipaddresses[0]
	I0604 21:56:34.007823    6964 main.go:141] libmachine: [stdout =====>] : 172.20.136.157
	
	I0604 21:56:34.007997    6964 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:56:34.007997    6964 sshutil.go:53] new ssh client: &{IP:172.20.136.157 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-235400\id_rsa Username:docker}
	I0604 21:56:34.162288    6964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0604 21:56:35.086704    6964 command_runner.go:130] > serviceaccount/storage-provisioner unchanged
	I0604 21:56:35.086793    6964 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner unchanged
	I0604 21:56:35.086793    6964 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I0604 21:56:35.086793    6964 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I0604 21:56:35.086793    6964 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath unchanged
	I0604 21:56:35.086874    6964 command_runner.go:130] > pod/storage-provisioner configured
	I0604 21:56:36.350489    6964 main.go:141] libmachine: [stdout =====>] : 172.20.136.157
	
	I0604 21:56:36.350562    6964 main.go:141] libmachine: [stderr =====>] : 
	I0604 21:56:36.351032    6964 sshutil.go:53] new ssh client: &{IP:172.20.136.157 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-235400\id_rsa Username:docker}
	I0604 21:56:36.498196    6964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0604 21:56:36.682838    6964 command_runner.go:130] > storageclass.storage.k8s.io/standard unchanged
	I0604 21:56:36.682838    6964 round_trippers.go:463] GET https://172.20.136.157:8441/apis/storage.k8s.io/v1/storageclasses
	I0604 21:56:36.682838    6964 round_trippers.go:469] Request Headers:
	I0604 21:56:36.682838    6964 round_trippers.go:473]     Accept: application/json, */*
	I0604 21:56:36.682838    6964 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 21:56:36.686992    6964 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0604 21:56:36.686992    6964 round_trippers.go:577] Response Headers:
	I0604 21:56:36.687606    6964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bc67a822-ecf2-4193-ad88-87cb0ce26a44
	I0604 21:56:36.687606    6964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 559e51f8-94bd-4b67-be51-fd991bc8927a
	I0604 21:56:36.687606    6964 round_trippers.go:580]     Content-Length: 1273
	I0604 21:56:36.687606    6964 round_trippers.go:580]     Date: Tue, 04 Jun 2024 21:56:36 GMT
	I0604 21:56:36.687697    6964 round_trippers.go:580]     Audit-Id: 6319e214-5daf-463e-a38e-84a83be2900e
	I0604 21:56:36.687697    6964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 21:56:36.687697    6964 round_trippers.go:580]     Content-Type: application/json
	I0604 21:56:36.687761    6964 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"632"},"items":[{"metadata":{"name":"standard","uid":"458bc719-2bde-4823-997d-e07223385d16","resourceVersion":"433","creationTimestamp":"2024-06-04T21:53:50Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-06-04T21:53:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0604 21:56:36.689320    6964 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"458bc719-2bde-4823-997d-e07223385d16","resourceVersion":"433","creationTimestamp":"2024-06-04T21:53:50Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-06-04T21:53:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0604 21:56:36.689502    6964 round_trippers.go:463] PUT https://172.20.136.157:8441/apis/storage.k8s.io/v1/storageclasses/standard
	I0604 21:56:36.689585    6964 round_trippers.go:469] Request Headers:
	I0604 21:56:36.689585    6964 round_trippers.go:473]     Accept: application/json, */*
	I0604 21:56:36.689630    6964 round_trippers.go:473]     Content-Type: application/json
	I0604 21:56:36.689630    6964 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 21:56:36.697144    6964 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0604 21:56:36.697144    6964 round_trippers.go:577] Response Headers:
	I0604 21:56:36.697144    6964 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 559e51f8-94bd-4b67-be51-fd991bc8927a
	I0604 21:56:36.697144    6964 round_trippers.go:580]     Content-Length: 1220
	I0604 21:56:36.697144    6964 round_trippers.go:580]     Date: Tue, 04 Jun 2024 21:56:36 GMT
	I0604 21:56:36.697144    6964 round_trippers.go:580]     Audit-Id: 4bc34f7e-3850-43a0-aa94-b9dcb67aae41
	I0604 21:56:36.697144    6964 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 21:56:36.697144    6964 round_trippers.go:580]     Content-Type: application/json
	I0604 21:56:36.697144    6964 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bc67a822-ecf2-4193-ad88-87cb0ce26a44
	I0604 21:56:36.697144    6964 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"458bc719-2bde-4823-997d-e07223385d16","resourceVersion":"433","creationTimestamp":"2024-06-04T21:53:50Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-06-04T21:53:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0604 21:56:36.701428    6964 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0604 21:56:36.704433    6964 addons.go:510] duration metric: took 10.4556152s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0604 21:56:36.704433    6964 start.go:245] waiting for cluster config update ...
	I0604 21:56:36.704433    6964 start.go:254] writing updated cluster config ...
	I0604 21:56:36.718450    6964 ssh_runner.go:195] Run: rm -f paused
	I0604 21:56:36.864796    6964 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0604 21:56:36.869112    6964 out.go:177] * Done! kubectl is now configured to use "functional-235400" cluster and "default" namespace by default
	
	
	==> Docker <==
	Jun 04 21:56:15 functional-235400 dockerd[4267]: time="2024-06-04T21:56:15.663498662Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 04 21:56:15 functional-235400 dockerd[4267]: time="2024-06-04T21:56:15.666164520Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 04 21:56:15 functional-235400 dockerd[4267]: time="2024-06-04T21:56:15.686834067Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 04 21:56:15 functional-235400 dockerd[4267]: time="2024-06-04T21:56:15.686991270Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 04 21:56:15 functional-235400 dockerd[4267]: time="2024-06-04T21:56:15.687076672Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 04 21:56:15 functional-235400 dockerd[4267]: time="2024-06-04T21:56:15.687409879Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 04 21:56:15 functional-235400 dockerd[4267]: time="2024-06-04T21:56:15.720398192Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 04 21:56:15 functional-235400 dockerd[4267]: time="2024-06-04T21:56:15.720634097Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 04 21:56:15 functional-235400 dockerd[4267]: time="2024-06-04T21:56:15.720738300Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 04 21:56:15 functional-235400 dockerd[4267]: time="2024-06-04T21:56:15.721187709Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 04 21:56:15 functional-235400 cri-dockerd[4494]: time="2024-06-04T21:56:15Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a6ce90600d64ca0936ce19132bae3fe0657e91e8e0b4940abd79b926ddda5741/resolv.conf as [nameserver 172.20.128.1]"
	Jun 04 21:56:15 functional-235400 cri-dockerd[4494]: time="2024-06-04T21:56:15Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/def8cfb49af0cae8fbc8c3a8557bdda257e14a71e97f906bf23657d362a9af49/resolv.conf as [nameserver 172.20.128.1]"
	Jun 04 21:56:16 functional-235400 dockerd[4267]: time="2024-06-04T21:56:16.089218928Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 04 21:56:16 functional-235400 dockerd[4267]: time="2024-06-04T21:56:16.089564134Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 04 21:56:16 functional-235400 dockerd[4267]: time="2024-06-04T21:56:16.089593535Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 04 21:56:16 functional-235400 dockerd[4267]: time="2024-06-04T21:56:16.089793539Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 04 21:56:16 functional-235400 dockerd[4267]: time="2024-06-04T21:56:16.133658369Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 04 21:56:16 functional-235400 dockerd[4267]: time="2024-06-04T21:56:16.133785471Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 04 21:56:16 functional-235400 dockerd[4267]: time="2024-06-04T21:56:16.133801271Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 04 21:56:16 functional-235400 dockerd[4267]: time="2024-06-04T21:56:16.134099077Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 04 21:56:16 functional-235400 cri-dockerd[4494]: time="2024-06-04T21:56:16Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/8ae3d30b2b147fd66f28c87d8d9e50e8c16713b87ef9c33fe4c5501d68fae405/resolv.conf as [nameserver 172.20.128.1]"
	Jun 04 21:56:16 functional-235400 dockerd[4267]: time="2024-06-04T21:56:16.775597363Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 04 21:56:16 functional-235400 dockerd[4267]: time="2024-06-04T21:56:16.775936971Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 04 21:56:16 functional-235400 dockerd[4267]: time="2024-06-04T21:56:16.775954971Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 04 21:56:16 functional-235400 dockerd[4267]: time="2024-06-04T21:56:16.776262178Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	64dfe654439c6       cbb01a7bd410d       2 minutes ago       Running             coredns                   1                   8ae3d30b2b147       coredns-7db6d8ff4d-gfcww
	8c5e6908fc8d7       6e38f40d628db       2 minutes ago       Running             storage-provisioner       1                   def8cfb49af0c       storage-provisioner
	c4d701ba89b73       747097150317f       2 minutes ago       Running             kube-proxy                1                   a6ce90600d64c       kube-proxy-2xs47
	2e58df51fa22c       91be940803172       2 minutes ago       Running             kube-apiserver            1                   75330288ad45e       kube-apiserver-functional-235400
	befd4f1880a79       25a1387cdab82       2 minutes ago       Running             kube-controller-manager   2                   271bf1a8d56bb       kube-controller-manager-functional-235400
	7aef2f692351f       a52dc94f0a912       2 minutes ago       Running             kube-scheduler            2                   0fc454c7fd9c8       kube-scheduler-functional-235400
	2569b5b0ea5e2       3861cfcd7c04c       2 minutes ago       Running             etcd                      2                   036064b9ce753       etcd-functional-235400
	142d77d5d2ae2       25a1387cdab82       2 minutes ago       Exited              kube-controller-manager   1                   794f6dbbd6fa4       kube-controller-manager-functional-235400
	52b42c3ade19b       a52dc94f0a912       2 minutes ago       Exited              kube-scheduler            1                   a5f43eec3aaf0       kube-scheduler-functional-235400
	06a86e247a382       3861cfcd7c04c       2 minutes ago       Exited              etcd                      1                   de0dc02886364       etcd-functional-235400
	98aaee8616661       6e38f40d628db       4 minutes ago       Exited              storage-provisioner       0                   8d9ec6c383fd3       storage-provisioner
	9efb527eaa1ba       747097150317f       4 minutes ago       Exited              kube-proxy                0                   8715ad83e4414       kube-proxy-2xs47
	7a46261538f5f       cbb01a7bd410d       4 minutes ago       Exited              coredns                   0                   614516231f09f       coredns-7db6d8ff4d-gfcww
	4a79633a19fb5       91be940803172       5 minutes ago       Exited              kube-apiserver            0                   7b9f0ecc69fbd       kube-apiserver-functional-235400
	
	
	==> coredns [64dfe654439c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 803c34e0384461ff6c3adbfda26136947e8f17f40be36badaa8609fa8af244ef6267c673c044efba02c9fcb4c4b158d9cab1e1fc67ace50c9a88568a50ddb1ae
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:50343 - 30076 "HINFO IN 3151026781623203693.107846607416558095. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.01751245s
	
	
	==> coredns [7a46261538f5] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] Reloading
	[INFO] plugin/kubernetes: Trace[685219899]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (04-Jun-2024 21:53:41.856) (total time: 29960ms):
	Trace[685219899]: ---"Objects listed" error:<nil> 29960ms (21:54:11.817)
	Trace[685219899]: [29.960397157s] [29.960397157s] END
	[INFO] plugin/kubernetes: Trace[499848904]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (04-Jun-2024 21:53:41.857) (total time: 29960ms):
	Trace[499848904]: ---"Objects listed" error:<nil> 29960ms (21:54:11.817)
	Trace[499848904]: [29.96032613s] [29.96032613s] END
	[INFO] plugin/kubernetes: Trace[2145700108]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (04-Jun-2024 21:53:41.857) (total time: 29960ms):
	Trace[2145700108]: ---"Objects listed" error:<nil> 29960ms (21:54:11.817)
	Trace[2145700108]: [29.960223223s] [29.960223223s] END
	[INFO] plugin/reload: Running configuration SHA512 = 803c34e0384461ff6c3adbfda26136947e8f17f40be36badaa8609fa8af244ef6267c673c044efba02c9fcb4c4b158d9cab1e1fc67ace50c9a88568a50ddb1ae
	[INFO] Reloading complete
	[INFO] 127.0.0.1:59313 - 8681 "HINFO IN 2388569144609019772.7191360249515230750. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.221110607s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-235400
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-235400
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=901ac483c3e1097c63cda7493d918b612a8127f5
	                    minikube.k8s.io/name=functional-235400
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_04T21_53_26_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 04 Jun 2024 21:53:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-235400
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 04 Jun 2024 21:58:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 04 Jun 2024 21:58:16 +0000   Tue, 04 Jun 2024 21:53:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 04 Jun 2024 21:58:16 +0000   Tue, 04 Jun 2024 21:53:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 04 Jun 2024 21:58:16 +0000   Tue, 04 Jun 2024 21:53:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 04 Jun 2024 21:58:16 +0000   Tue, 04 Jun 2024 21:53:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.20.136.157
	  Hostname:    functional-235400
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912872Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912872Ki
	  pods:               110
	System Info:
	  Machine ID:                 f4a554f0e159488b88f57cdcbc229bbb
	  System UUID:                9083dc34-8652-944e-a2a3-305268f03f95
	  Boot ID:                    178e00ff-f9b9-4e99-bf2d-0c7bee75e78f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.1.3
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-gfcww                     100m (5%!)(MISSING)     0 (0%!)(MISSING)      70Mi (1%!)(MISSING)        170Mi (4%!)(MISSING)     4m52s
	  kube-system                 etcd-functional-235400                       100m (5%!)(MISSING)     0 (0%!)(MISSING)      100Mi (2%!)(MISSING)       0 (0%!)(MISSING)         5m6s
	  kube-system                 kube-apiserver-functional-235400             250m (12%!)(MISSING)    0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         5m6s
	  kube-system                 kube-controller-manager-functional-235400    200m (10%!)(MISSING)    0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         5m6s
	  kube-system                 kube-proxy-2xs47                             0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         4m53s
	  kube-system                 kube-scheduler-functional-235400             100m (5%!)(MISSING)     0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         5m6s
	  kube-system                 storage-provisioner                          0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         4m44s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%!)(MISSING)  0 (0%!)(MISSING)
	  memory             170Mi (4%!)(MISSING)  170Mi (4%!)(MISSING)
	  ephemeral-storage  0 (0%!)(MISSING)      0 (0%!)(MISSING)
	  hugepages-2Mi      0 (0%!)(MISSING)      0 (0%!)(MISSING)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m49s                  kube-proxy       
	  Normal  Starting                 2m15s                  kube-proxy       
	  Normal  NodeHasSufficientPID     5m15s (x7 over 5m15s)  kubelet          Node functional-235400 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    5m15s (x8 over 5m15s)  kubelet          Node functional-235400 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  5m15s (x8 over 5m15s)  kubelet          Node functional-235400 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  5m15s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 5m6s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m6s (x2 over 5m6s)    kubelet          Node functional-235400 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m6s (x2 over 5m6s)    kubelet          Node functional-235400 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m6s (x2 over 5m6s)    kubelet          Node functional-235400 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m6s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                5m3s                   kubelet          Node functional-235400 status is now: NodeReady
	  Normal  RegisteredNode           4m54s                  node-controller  Node functional-235400 event: Registered Node functional-235400 in Controller
	  Normal  Starting                 2m23s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m22s (x8 over 2m22s)  kubelet          Node functional-235400 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m22s (x8 over 2m22s)  kubelet          Node functional-235400 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m22s (x7 over 2m22s)  kubelet          Node functional-235400 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m22s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m5s                   node-controller  Node functional-235400 event: Registered Node functional-235400 in Controller
	
	
	==> dmesg <==
	[  +0.851655] systemd-fstab-generator[1520]: Ignoring "noauto" option for root device
	[  +7.685243] systemd-fstab-generator[1727]: Ignoring "noauto" option for root device
	[  +0.120258] kauditd_printk_skb: 51 callbacks suppressed
	[  +9.043810] systemd-fstab-generator[2122]: Ignoring "noauto" option for root device
	[  +0.155860] kauditd_printk_skb: 62 callbacks suppressed
	[ +13.891611] systemd-fstab-generator[2356]: Ignoring "noauto" option for root device
	[  +0.237355] kauditd_printk_skb: 12 callbacks suppressed
	[  +8.758295] kauditd_printk_skb: 88 callbacks suppressed
	[Jun 4 21:54] kauditd_printk_skb: 10 callbacks suppressed
	[Jun 4 21:55] systemd-fstab-generator[3782]: Ignoring "noauto" option for root device
	[  +0.756503] systemd-fstab-generator[3817]: Ignoring "noauto" option for root device
	[  +0.320864] systemd-fstab-generator[3829]: Ignoring "noauto" option for root device
	[  +0.381057] systemd-fstab-generator[3844]: Ignoring "noauto" option for root device
	[  +5.321826] kauditd_printk_skb: 89 callbacks suppressed
	[Jun 4 21:56] systemd-fstab-generator[4443]: Ignoring "noauto" option for root device
	[  +0.234970] systemd-fstab-generator[4455]: Ignoring "noauto" option for root device
	[  +0.212480] systemd-fstab-generator[4467]: Ignoring "noauto" option for root device
	[  +0.347974] systemd-fstab-generator[4482]: Ignoring "noauto" option for root device
	[  +0.995055] systemd-fstab-generator[4635]: Ignoring "noauto" option for root device
	[  +0.501922] kauditd_printk_skb: 140 callbacks suppressed
	[  +4.392758] systemd-fstab-generator[5310]: Ignoring "noauto" option for root device
	[  +1.283986] kauditd_printk_skb: 63 callbacks suppressed
	[  +5.357724] kauditd_printk_skb: 32 callbacks suppressed
	[ +10.125526] systemd-fstab-generator[6052]: Ignoring "noauto" option for root device
	[  +0.209252] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [06a86e247a38] <==
	{"level":"info","ts":"2024-06-04T21:56:06.018579Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"14.609314ms"}
	{"level":"info","ts":"2024-06-04T21:56:06.045855Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-06-04T21:56:06.071735Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"25b169beae28ca6","local-member-id":"b468202ad3019868","commit-index":579}
	{"level":"info","ts":"2024-06-04T21:56:06.084473Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b468202ad3019868 switched to configuration voters=()"}
	{"level":"info","ts":"2024-06-04T21:56:06.084623Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b468202ad3019868 became follower at term 2"}
	{"level":"info","ts":"2024-06-04T21:56:06.084639Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft b468202ad3019868 [peers: [], term: 2, commit: 579, applied: 0, lastindex: 579, lastterm: 2]"}
	{"level":"warn","ts":"2024-06-04T21:56:06.098714Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2024-06-04T21:56:06.138866Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":542}
	{"level":"info","ts":"2024-06-04T21:56:06.159795Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2024-06-04T21:56:06.176868Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"b468202ad3019868","timeout":"7s"}
	{"level":"info","ts":"2024-06-04T21:56:06.182206Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"b468202ad3019868"}
	{"level":"info","ts":"2024-06-04T21:56:06.182245Z","caller":"etcdserver/server.go:860","msg":"starting etcd server","local-member-id":"b468202ad3019868","local-server-version":"3.5.12","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-06-04T21:56:06.18505Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-06-04T21:56:06.185751Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-06-04T21:56:06.185823Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-06-04T21:56:06.185837Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-06-04T21:56:06.186232Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b468202ad3019868 switched to configuration voters=(12999675692705749096)"}
	{"level":"info","ts":"2024-06-04T21:56:06.192381Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-06-04T21:56:06.19453Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"25b169beae28ca6","local-member-id":"b468202ad3019868","added-peer-id":"b468202ad3019868","added-peer-peer-urls":["https://172.20.136.157:2380"]}
	{"level":"info","ts":"2024-06-04T21:56:06.196009Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"25b169beae28ca6","local-member-id":"b468202ad3019868","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-04T21:56:06.196043Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-04T21:56:06.194751Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.20.136.157:2380"}
	{"level":"info","ts":"2024-06-04T21:56:06.203356Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.20.136.157:2380"}
	{"level":"info","ts":"2024-06-04T21:56:06.211727Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"b468202ad3019868","initial-advertise-peer-urls":["https://172.20.136.157:2380"],"listen-peer-urls":["https://172.20.136.157:2380"],"advertise-client-urls":["https://172.20.136.157:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.20.136.157:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-06-04T21:56:06.211757Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	
	
	==> etcd [2569b5b0ea5e] <==
	{"level":"info","ts":"2024-06-04T21:56:11.269529Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"25b169beae28ca6","local-member-id":"b468202ad3019868","added-peer-id":"b468202ad3019868","added-peer-peer-urls":["https://172.20.136.157:2380"]}
	{"level":"info","ts":"2024-06-04T21:56:11.26967Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"25b169beae28ca6","local-member-id":"b468202ad3019868","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-04T21:56:11.269731Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-04T21:56:11.280987Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-06-04T21:56:11.281052Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-06-04T21:56:11.281071Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-06-04T21:56:11.291747Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-06-04T21:56:11.29197Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"b468202ad3019868","initial-advertise-peer-urls":["https://172.20.136.157:2380"],"listen-peer-urls":["https://172.20.136.157:2380"],"advertise-client-urls":["https://172.20.136.157:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.20.136.157:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-06-04T21:56:11.292Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-06-04T21:56:11.292156Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.20.136.157:2380"}
	{"level":"info","ts":"2024-06-04T21:56:11.292168Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.20.136.157:2380"}
	{"level":"info","ts":"2024-06-04T21:56:12.332511Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b468202ad3019868 is starting a new election at term 2"}
	{"level":"info","ts":"2024-06-04T21:56:12.332586Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b468202ad3019868 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-06-04T21:56:12.332623Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b468202ad3019868 received MsgPreVoteResp from b468202ad3019868 at term 2"}
	{"level":"info","ts":"2024-06-04T21:56:12.332859Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b468202ad3019868 became candidate at term 3"}
	{"level":"info","ts":"2024-06-04T21:56:12.332939Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b468202ad3019868 received MsgVoteResp from b468202ad3019868 at term 3"}
	{"level":"info","ts":"2024-06-04T21:56:12.332957Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b468202ad3019868 became leader at term 3"}
	{"level":"info","ts":"2024-06-04T21:56:12.333085Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b468202ad3019868 elected leader b468202ad3019868 at term 3"}
	{"level":"info","ts":"2024-06-04T21:56:12.339836Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"b468202ad3019868","local-member-attributes":"{Name:functional-235400 ClientURLs:[https://172.20.136.157:2379]}","request-path":"/0/members/b468202ad3019868/attributes","cluster-id":"25b169beae28ca6","publish-timeout":"7s"}
	{"level":"info","ts":"2024-06-04T21:56:12.340488Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-04T21:56:12.34065Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-04T21:56:12.349328Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.20.136.157:2379"}
	{"level":"info","ts":"2024-06-04T21:56:12.353497Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-06-04T21:56:12.353521Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-06-04T21:56:12.360369Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 21:58:32 up 7 min,  0 users,  load average: 0.37, 0.40, 0.21
	Linux functional-235400 5.10.207 #1 SMP Tue Jun 4 20:09:42 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [2e58df51fa22] <==
	I0604 21:56:14.682530       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0604 21:56:14.683840       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0604 21:56:14.684040       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0604 21:56:14.688134       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0604 21:56:14.688400       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0604 21:56:14.688658       1 policy_source.go:224] refreshing policies
	I0604 21:56:14.688986       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0604 21:56:14.689876       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0604 21:56:14.689922       1 aggregator.go:165] initial CRD sync complete...
	I0604 21:56:14.689930       1 autoregister_controller.go:141] Starting autoregister controller
	I0604 21:56:14.689935       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0604 21:56:14.689940       1 cache.go:39] Caches are synced for autoregister controller
	I0604 21:56:14.691641       1 shared_informer.go:320] Caches are synced for configmaps
	I0604 21:56:14.701078       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	E0604 21:56:14.720856       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0604 21:56:14.740275       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0604 21:56:14.761290       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0604 21:56:15.515884       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0604 21:56:16.729266       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0604 21:56:16.778826       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0604 21:56:16.953686       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0604 21:56:17.071885       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0604 21:56:17.103672       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0604 21:56:27.346158       1 controller.go:615] quota admission added evaluator for: endpoints
	I0604 21:56:27.513787       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [4a79633a19fb] <==
	W0604 21:55:59.212297       1 logging.go:59] [core] [Channel #49 SubChannel #50] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0604 21:55:59.248717       1 logging.go:59] [core] [Channel #139 SubChannel #140] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0604 21:55:59.283893       1 logging.go:59] [core] [Channel #112 SubChannel #113] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0604 21:55:59.299499       1 logging.go:59] [core] [Channel #103 SubChannel #104] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0604 21:55:59.351838       1 logging.go:59] [core] [Channel #58 SubChannel #59] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0604 21:55:59.375028       1 logging.go:59] [core] [Channel #70 SubChannel #71] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0604 21:55:59.407213       1 logging.go:59] [core] [Channel #82 SubChannel #83] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0604 21:55:59.415055       1 logging.go:59] [core] [Channel #124 SubChannel #125] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0604 21:55:59.440595       1 logging.go:59] [core] [Channel #118 SubChannel #119] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0604 21:55:59.521405       1 logging.go:59] [core] [Channel #181 SubChannel #182] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0604 21:55:59.533147       1 logging.go:59] [core] [Channel #76 SubChannel #77] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0604 21:55:59.575019       1 logging.go:59] [core] [Channel #157 SubChannel #158] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0604 21:55:59.652764       1 logging.go:59] [core] [Channel #85 SubChannel #86] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0604 21:55:59.675693       1 logging.go:59] [core] [Channel #115 SubChannel #116] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0604 21:55:59.675804       1 logging.go:59] [core] [Channel #136 SubChannel #137] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0604 21:55:59.682262       1 logging.go:59] [core] [Channel #52 SubChannel #53] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0604 21:55:59.685284       1 logging.go:59] [core] [Channel #142 SubChannel #143] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0604 21:55:59.696347       1 logging.go:59] [core] [Channel #154 SubChannel #155] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0604 21:55:59.734898       1 logging.go:59] [core] [Channel #31 SubChannel #32] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0604 21:55:59.743527       1 logging.go:59] [core] [Channel #148 SubChannel #149] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0604 21:55:59.746342       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0604 21:55:59.753272       1 logging.go:59] [core] [Channel #55 SubChannel #56] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0604 21:55:59.774314       1 logging.go:59] [core] [Channel #169 SubChannel #170] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0604 21:55:59.789065       1 logging.go:59] [core] [Channel #133 SubChannel #134] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0604 21:55:59.803362       1 logging.go:59] [core] [Channel #43 SubChannel #44] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [142d77d5d2ae] <==
	
	
	==> kube-controller-manager [befd4f1880a7] <==
	I0604 21:56:27.326876       1 shared_informer.go:320] Caches are synced for daemon sets
	I0604 21:56:27.333710       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0604 21:56:27.333898       1 shared_informer.go:320] Caches are synced for deployment
	I0604 21:56:27.337668       1 shared_informer.go:320] Caches are synced for PVC protection
	I0604 21:56:27.339112       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0604 21:56:27.344207       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0604 21:56:27.347855       1 shared_informer.go:320] Caches are synced for stateful set
	I0604 21:56:27.348340       1 shared_informer.go:320] Caches are synced for PV protection
	I0604 21:56:27.355282       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0604 21:56:27.356085       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0604 21:56:27.356455       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0604 21:56:27.356668       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0604 21:56:27.359221       1 shared_informer.go:320] Caches are synced for persistent volume
	I0604 21:56:27.360623       1 shared_informer.go:320] Caches are synced for job
	I0604 21:56:27.364068       1 shared_informer.go:320] Caches are synced for cronjob
	I0604 21:56:27.367490       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0604 21:56:27.371843       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0604 21:56:27.395709       1 shared_informer.go:320] Caches are synced for resource quota
	I0604 21:56:27.444573       1 shared_informer.go:320] Caches are synced for resource quota
	I0604 21:56:27.481877       1 shared_informer.go:320] Caches are synced for disruption
	I0604 21:56:27.482510       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0604 21:56:27.582555       1 shared_informer.go:320] Caches are synced for attach detach
	I0604 21:56:27.987605       1 shared_informer.go:320] Caches are synced for garbage collector
	I0604 21:56:27.987639       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0604 21:56:28.000004       1 shared_informer.go:320] Caches are synced for garbage collector
	
	
	==> kube-proxy [9efb527eaa1b] <==
	I0604 21:53:42.077508       1 server_linux.go:69] "Using iptables proxy"
	I0604 21:53:42.091524       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.20.136.157"]
	I0604 21:53:42.147585       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0604 21:53:42.147699       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0604 21:53:42.147725       1 server_linux.go:165] "Using iptables Proxier"
	I0604 21:53:42.152793       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0604 21:53:42.153967       1 server.go:872] "Version info" version="v1.30.1"
	I0604 21:53:42.154066       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0604 21:53:42.155849       1 config.go:192] "Starting service config controller"
	I0604 21:53:42.156086       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0604 21:53:42.156133       1 config.go:101] "Starting endpoint slice config controller"
	I0604 21:53:42.156943       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0604 21:53:42.161617       1 config.go:319] "Starting node config controller"
	I0604 21:53:42.161704       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0604 21:53:42.256809       1 shared_informer.go:320] Caches are synced for service config
	I0604 21:53:42.257307       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0604 21:53:42.264535       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [c4d701ba89b7] <==
	I0604 21:56:16.553698       1 server_linux.go:69] "Using iptables proxy"
	I0604 21:56:16.579708       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.20.136.157"]
	I0604 21:56:16.694586       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0604 21:56:16.694650       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0604 21:56:16.694671       1 server_linux.go:165] "Using iptables Proxier"
	I0604 21:56:16.708717       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0604 21:56:16.709317       1 server.go:872] "Version info" version="v1.30.1"
	I0604 21:56:16.709747       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0604 21:56:16.711751       1 config.go:192] "Starting service config controller"
	I0604 21:56:16.711980       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0604 21:56:16.712385       1 config.go:101] "Starting endpoint slice config controller"
	I0604 21:56:16.712545       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0604 21:56:16.713512       1 config.go:319] "Starting node config controller"
	I0604 21:56:16.715494       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0604 21:56:16.812530       1 shared_informer.go:320] Caches are synced for service config
	I0604 21:56:16.813736       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0604 21:56:16.815884       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [52b42c3ade19] <==
	
	
	==> kube-scheduler [7aef2f692351] <==
	I0604 21:56:13.149641       1 serving.go:380] Generated self-signed cert in-memory
	W0604 21:56:14.629392       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0604 21:56:14.629482       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0604 21:56:14.629497       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0604 21:56:14.629508       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0604 21:56:14.682411       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.1"
	I0604 21:56:14.685562       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0604 21:56:14.690221       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0604 21:56:14.690415       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0604 21:56:14.691491       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0604 21:56:14.691411       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0604 21:56:14.792368       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jun 04 21:56:14 functional-235400 kubelet[5317]: I0604 21:56:14.824884    5317 kubelet_node_status.go:112] "Node was previously registered" node="functional-235400"
	Jun 04 21:56:14 functional-235400 kubelet[5317]: I0604 21:56:14.825233    5317 kubelet_node_status.go:76] "Successfully registered node" node="functional-235400"
	Jun 04 21:56:14 functional-235400 kubelet[5317]: I0604 21:56:14.827650    5317 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jun 04 21:56:14 functional-235400 kubelet[5317]: I0604 21:56:14.828971    5317 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jun 04 21:56:14 functional-235400 kubelet[5317]: I0604 21:56:14.933080    5317 apiserver.go:52] "Watching apiserver"
	Jun 04 21:56:14 functional-235400 kubelet[5317]: I0604 21:56:14.938080    5317 topology_manager.go:215] "Topology Admit Handler" podUID="144b79dc-c192-4e05-a481-2047f1a943c9" podNamespace="kube-system" podName="kube-proxy-2xs47"
	Jun 04 21:56:14 functional-235400 kubelet[5317]: I0604 21:56:14.938242    5317 topology_manager.go:215] "Topology Admit Handler" podUID="8ce65d0a-5c28-4a96-a273-1c7987dcffb1" podNamespace="kube-system" podName="coredns-7db6d8ff4d-gfcww"
	Jun 04 21:56:14 functional-235400 kubelet[5317]: I0604 21:56:14.938343    5317 topology_manager.go:215] "Topology Admit Handler" podUID="787187d8-02e6-447b-a0a1-dc664d9226e5" podNamespace="kube-system" podName="storage-provisioner"
	Jun 04 21:56:14 functional-235400 kubelet[5317]: I0604 21:56:14.961702    5317 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Jun 04 21:56:15 functional-235400 kubelet[5317]: I0604 21:56:15.042854    5317 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/787187d8-02e6-447b-a0a1-dc664d9226e5-tmp\") pod \"storage-provisioner\" (UID: \"787187d8-02e6-447b-a0a1-dc664d9226e5\") " pod="kube-system/storage-provisioner"
	Jun 04 21:56:15 functional-235400 kubelet[5317]: I0604 21:56:15.042989    5317 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/144b79dc-c192-4e05-a481-2047f1a943c9-xtables-lock\") pod \"kube-proxy-2xs47\" (UID: \"144b79dc-c192-4e05-a481-2047f1a943c9\") " pod="kube-system/kube-proxy-2xs47"
	Jun 04 21:56:15 functional-235400 kubelet[5317]: I0604 21:56:15.043033    5317 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/144b79dc-c192-4e05-a481-2047f1a943c9-lib-modules\") pod \"kube-proxy-2xs47\" (UID: \"144b79dc-c192-4e05-a481-2047f1a943c9\") " pod="kube-system/kube-proxy-2xs47"
	Jun 04 21:56:15 functional-235400 kubelet[5317]: I0604 21:56:15.820731    5317 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a6ce90600d64ca0936ce19132bae3fe0657e91e8e0b4940abd79b926ddda5741"
	Jun 04 21:56:19 functional-235400 kubelet[5317]: I0604 21:56:19.063832    5317 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Jun 04 21:56:20 functional-235400 kubelet[5317]: I0604 21:56:20.370569    5317 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Jun 04 21:57:10 functional-235400 kubelet[5317]: E0604 21:57:10.092514    5317 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 04 21:57:10 functional-235400 kubelet[5317]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 04 21:57:10 functional-235400 kubelet[5317]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 04 21:57:10 functional-235400 kubelet[5317]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 04 21:57:10 functional-235400 kubelet[5317]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 04 21:58:10 functional-235400 kubelet[5317]: E0604 21:58:10.099002    5317 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 04 21:58:10 functional-235400 kubelet[5317]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 04 21:58:10 functional-235400 kubelet[5317]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 04 21:58:10 functional-235400 kubelet[5317]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 04 21:58:10 functional-235400 kubelet[5317]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	
	
	==> storage-provisioner [8c5e6908fc8d] <==
	I0604 21:56:16.413946       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0604 21:56:16.455293       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0604 21:56:16.455341       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0604 21:56:33.887265       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0604 21:56:33.887578       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-235400_93f5796a-2c51-4a8a-95ec-2d9617523286!
	I0604 21:56:33.888086       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"69781851-73a8-46e4-a46e-521b9013da9a", APIVersion:"v1", ResourceVersion:"627", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-235400_93f5796a-2c51-4a8a-95ec-2d9617523286 became leader
	I0604 21:56:33.988029       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-235400_93f5796a-2c51-4a8a-95ec-2d9617523286!
	
	
	==> storage-provisioner [98aaee861666] <==
	I0604 21:53:49.255272       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0604 21:53:49.270477       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0604 21:53:49.270596       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0604 21:53:49.291821       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0604 21:53:49.292772       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-235400_45f5eab4-db91-4a84-9336-9e687a7616ca!
	I0604 21:53:49.294478       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"69781851-73a8-46e4-a46e-521b9013da9a", APIVersion:"v1", ResourceVersion:"430", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-235400_45f5eab4-db91-4a84-9336-9e687a7616ca became leader
	I0604 21:53:49.401058       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-235400_45f5eab4-db91-4a84-9336-9e687a7616ca!
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0604 21:58:23.699537   13080 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-235400 -n functional-235400
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-235400 -n functional-235400: (12.8074209s)
helpers_test.go:261: (dbg) Run:  kubectl --context functional-235400 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestFunctional/serial/MinikubeKubectlCmdDirectly FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (36.08s)
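The kubelet entries near the end of the log above repeatedly fail to create the KUBE-KUBELET-CANARY chain because the guest reports no ip6tables nat table. A quick way to confirm whether the ip6table_nat module is merely unloaded in the Buildroot guest, assuming the functional-235400 VM is still running (PowerShell; a suggested follow-up, not part of the recorded run):

	# look for the ip6tables NAT module inside the minikube VM
	out/minikube-windows-amd64.exe -p functional-235400 ssh -- "lsmod | grep ip6table_nat"
	# if the module is present but unloaded, loading it should silence the canary errors
	out/minikube-windows-amd64.exe -p functional-235400 ssh -- "sudo modprobe ip6table_nat"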

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (1.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-235400 config unset cpus
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-235400 config unset cpus" to be -""- but got *"W0604 22:01:45.761952    2216 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube6\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-235400 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-235400 config get cpus: exit status 14 (283.1872ms)

                                                
                                                
** stderr ** 
	W0604 22:01:46.087320    3824 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-235400 config get cpus" to be -"Error: specified key could not be found in config"- but got *"W0604 22:01:46.087320    3824 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube6\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\nError: specified key could not be found in config"*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-235400 config set cpus 2
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-235400 config set cpus 2" to be -"! These changes will take effect upon a minikube delete and then a minikube start"- but got *"W0604 22:01:46.358484    5668 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube6\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n! These changes will take effect upon a minikube delete and then a minikube start"*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-235400 config get cpus
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-235400 config get cpus" to be -""- but got *"W0604 22:01:46.655085    8328 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube6\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-235400 config unset cpus
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-235400 config unset cpus" to be -""- but got *"W0604 22:01:46.932290    9316 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube6\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-235400 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-235400 config get cpus: exit status 14 (242.7149ms)

                                                
                                                
** stderr ** 
	W0604 22:01:47.201497   13776 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-235400 config get cpus" to be -"Error: specified key could not be found in config"- but got *"W0604 22:01:47.201497   13776 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube6\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\nError: specified key could not be found in config"*
--- FAIL: TestFunctional/parallel/ConfigCmd (1.71s)
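The assertion mismatches above differ from the expected output only by the prepended "Unable to resolve the current Docker CLI context" warning, which the Docker CLI emits because the context metadata file under C:\Users\jenkins.minikube6\.docker\contexts\meta is missing. A minimal way to inspect and clear the stale context selection on the agent, assuming the Docker CLI is on PATH (PowerShell; a suggested follow-up, not part of the recorded run):

	# show which context the CLI currently resolves
	docker context ls
	# point the CLI back at the built-in default context so it stops looking for the missing meta.json
	docker context use default
	# a leftover DOCKER_CONTEXT variable from another job would produce the same warning
	$Env:DOCKER_CONTEXT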

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (15.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-235400 service --namespace=default --https --url hello-node
functional_test.go:1505: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-235400 service --namespace=default --https --url hello-node: exit status 1 (15.0535933s)

                                                
                                                
** stderr ** 
	W0604 22:02:34.796605    3728 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
functional_test.go:1507: failed to get service url. args "out/minikube-windows-amd64.exe -p functional-235400 service --namespace=default --https --url hello-node" : exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (15.05s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (15.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-235400 service hello-node --url --format={{.IP}}
functional_test.go:1536: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-235400 service hello-node --url --format={{.IP}}: exit status 1 (15.0208448s)

                                                
                                                
** stderr ** 
	W0604 22:02:49.883400   12964 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
functional_test.go:1538: failed to get service url with custom format. args "out/minikube-windows-amd64.exe -p functional-235400 service hello-node --url --format={{.IP}}": exit status 1
functional_test.go:1544: "" is not a valid IP
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (15.02s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (15.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-235400 service hello-node --url
functional_test.go:1555: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-235400 service hello-node --url: exit status 1 (15.0212852s)

                                                
                                                
** stderr ** 
	W0604 22:03:04.855300    2528 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
functional_test.go:1557: failed to get service url. args: "out/minikube-windows-amd64.exe -p functional-235400 service hello-node --url": exit status 1
functional_test.go:1561: found endpoint for hello-node: 
functional_test.go:1569: expected scheme to be -"http"- got scheme: *""*
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (15.02s)
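Each of the three ServiceCmd lookups above exits with status 1 after the 15-second wait, and the only stderr output is the same Docker CLI context warning, so the report gives no indication whether the hello-node service ever had a reachable endpoint. A minimal check before retrying the URL lookup, assuming the functional-235400 kubeconfig context still exists (PowerShell; a suggested follow-up, not part of the recorded run):

	# confirm the service exists and has ready endpoints behind it
	kubectl --context functional-235400 get svc hello-node -o wide
	kubectl --context functional-235400 get endpoints hello-node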

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (74.25s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-609500 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-609500 -- exec busybox-fc5497c4f-gbl9h -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-609500 -- exec busybox-fc5497c4f-gbl9h -- sh -c "ping -c 1 172.20.128.1"
ha_test.go:218: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-609500 -- exec busybox-fc5497c4f-gbl9h -- sh -c "ping -c 1 172.20.128.1": exit status 1 (10.5471433s)

                                                
                                                
-- stdout --
	PING 172.20.128.1 (172.20.128.1): 56 data bytes
	
	--- 172.20.128.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

                                                
                                                
-- /stdout --
** stderr ** 
	W0604 22:22:16.218430    1076 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

                                                
                                                
** /stderr **
ha_test.go:219: Failed to ping host (172.20.128.1) from pod (busybox-fc5497c4f-gbl9h): exit status 1
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-609500 -- exec busybox-fc5497c4f-m2dsk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-609500 -- exec busybox-fc5497c4f-m2dsk -- sh -c "ping -c 1 172.20.128.1"
ha_test.go:218: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-609500 -- exec busybox-fc5497c4f-m2dsk -- sh -c "ping -c 1 172.20.128.1": exit status 1 (10.5328214s)

                                                
                                                
-- stdout --
	PING 172.20.128.1 (172.20.128.1): 56 data bytes
	
	--- 172.20.128.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

                                                
                                                
-- /stdout --
** stderr ** 
	W0604 22:22:27.344877    2896 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

                                                
                                                
** /stderr **
ha_test.go:219: Failed to ping host (172.20.128.1) from pod (busybox-fc5497c4f-m2dsk): exit status 1
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-609500 -- exec busybox-fc5497c4f-qm589 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-609500 -- exec busybox-fc5497c4f-qm589 -- sh -c "ping -c 1 172.20.128.1"
ha_test.go:218: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-609500 -- exec busybox-fc5497c4f-qm589 -- sh -c "ping -c 1 172.20.128.1": exit status 1 (10.5400113s)

                                                
                                                
-- stdout --
	PING 172.20.128.1 (172.20.128.1): 56 data bytes
	
	--- 172.20.128.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

                                                
                                                
-- /stdout --
** stderr ** 
	W0604 22:22:38.427515    5784 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

                                                
                                                
** /stderr **
ha_test.go:219: Failed to ping host (172.20.128.1) from pod (busybox-fc5497c4f-qm589): exit status 1
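Each of the three busybox pods above shows 100% packet loss pinging the host-side gateway 172.20.128.1 even though the preceding nslookup steps succeed. One common culprit in this setup is the Windows host firewall dropping inbound echo requests arriving over the Hyper-V virtual switch; a minimal rule to rule that out, assuming Windows Defender Firewall is active on the host (elevated PowerShell; a suggested follow-up, not part of the recorded run):

	# allow inbound ICMPv4 echo so pods can ping the host-side gateway
	New-NetFirewallRule -DisplayName "ICMPv4-In (minikube ping test)" -Protocol ICMPv4 -IcmpType 8 -Direction Inbound -Action Allow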
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-609500 -n ha-609500
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-609500 -n ha-609500: (13.6937156s)
helpers_test.go:244: <<< TestMultiControlPlane/serial/PingHostFromPods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/PingHostFromPods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-609500 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p ha-609500 logs -n 25: (10.1904811s)
helpers_test.go:252: TestMultiControlPlane/serial/PingHostFromPods logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| Command |                 Args                 |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| image   | functional-235400                    | functional-235400 | minikube6\jenkins | v1.33.1 | 04 Jun 24 22:06 UTC | 04 Jun 24 22:06 UTC |
	|         | image ls --format table              |                   |                   |         |                     |                     |
	|         | --alsologtostderr                    |                   |                   |         |                     |                     |
	| image   | functional-235400 image build -t     | functional-235400 | minikube6\jenkins | v1.33.1 | 04 Jun 24 22:06 UTC | 04 Jun 24 22:06 UTC |
	|         | localhost/my-image:functional-235400 |                   |                   |         |                     |                     |
	|         | testdata\build --alsologtostderr     |                   |                   |         |                     |                     |
	| image   | functional-235400 image ls           | functional-235400 | minikube6\jenkins | v1.33.1 | 04 Jun 24 22:06 UTC | 04 Jun 24 22:06 UTC |
	| delete  | -p functional-235400                 | functional-235400 | minikube6\jenkins | v1.33.1 | 04 Jun 24 22:08 UTC | 04 Jun 24 22:09 UTC |
	| start   | -p ha-609500 --wait=true             | ha-609500         | minikube6\jenkins | v1.33.1 | 04 Jun 24 22:09 UTC | 04 Jun 24 22:21 UTC |
	|         | --memory=2200 --ha                   |                   |                   |         |                     |                     |
	|         | -v=7 --alsologtostderr               |                   |                   |         |                     |                     |
	|         | --driver=hyperv                      |                   |                   |         |                     |                     |
	| kubectl | -p ha-609500 -- apply -f             | ha-609500         | minikube6\jenkins | v1.33.1 | 04 Jun 24 22:22 UTC | 04 Jun 24 22:22 UTC |
	|         | ./testdata/ha/ha-pod-dns-test.yaml   |                   |                   |         |                     |                     |
	| kubectl | -p ha-609500 -- rollout status       | ha-609500         | minikube6\jenkins | v1.33.1 | 04 Jun 24 22:22 UTC | 04 Jun 24 22:22 UTC |
	|         | deployment/busybox                   |                   |                   |         |                     |                     |
	| kubectl | -p ha-609500 -- get pods -o          | ha-609500         | minikube6\jenkins | v1.33.1 | 04 Jun 24 22:22 UTC | 04 Jun 24 22:22 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |                   |         |                     |                     |
	| kubectl | -p ha-609500 -- get pods -o          | ha-609500         | minikube6\jenkins | v1.33.1 | 04 Jun 24 22:22 UTC | 04 Jun 24 22:22 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |                   |                   |         |                     |                     |
	| kubectl | -p ha-609500 -- exec                 | ha-609500         | minikube6\jenkins | v1.33.1 | 04 Jun 24 22:22 UTC | 04 Jun 24 22:22 UTC |
	|         | busybox-fc5497c4f-gbl9h --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |                   |         |                     |                     |
	| kubectl | -p ha-609500 -- exec                 | ha-609500         | minikube6\jenkins | v1.33.1 | 04 Jun 24 22:22 UTC | 04 Jun 24 22:22 UTC |
	|         | busybox-fc5497c4f-m2dsk --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |                   |         |                     |                     |
	| kubectl | -p ha-609500 -- exec                 | ha-609500         | minikube6\jenkins | v1.33.1 | 04 Jun 24 22:22 UTC | 04 Jun 24 22:22 UTC |
	|         | busybox-fc5497c4f-qm589 --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |                   |         |                     |                     |
	| kubectl | -p ha-609500 -- exec                 | ha-609500         | minikube6\jenkins | v1.33.1 | 04 Jun 24 22:22 UTC | 04 Jun 24 22:22 UTC |
	|         | busybox-fc5497c4f-gbl9h --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |                   |         |                     |                     |
	| kubectl | -p ha-609500 -- exec                 | ha-609500         | minikube6\jenkins | v1.33.1 | 04 Jun 24 22:22 UTC | 04 Jun 24 22:22 UTC |
	|         | busybox-fc5497c4f-m2dsk --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |                   |         |                     |                     |
	| kubectl | -p ha-609500 -- exec                 | ha-609500         | minikube6\jenkins | v1.33.1 | 04 Jun 24 22:22 UTC | 04 Jun 24 22:22 UTC |
	|         | busybox-fc5497c4f-qm589 --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |                   |         |                     |                     |
	| kubectl | -p ha-609500 -- exec                 | ha-609500         | minikube6\jenkins | v1.33.1 | 04 Jun 24 22:22 UTC | 04 Jun 24 22:22 UTC |
	|         | busybox-fc5497c4f-gbl9h -- nslookup  |                   |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |                   |         |                     |                     |
	| kubectl | -p ha-609500 -- exec                 | ha-609500         | minikube6\jenkins | v1.33.1 | 04 Jun 24 22:22 UTC | 04 Jun 24 22:22 UTC |
	|         | busybox-fc5497c4f-m2dsk -- nslookup  |                   |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |                   |         |                     |                     |
	| kubectl | -p ha-609500 -- exec                 | ha-609500         | minikube6\jenkins | v1.33.1 | 04 Jun 24 22:22 UTC | 04 Jun 24 22:22 UTC |
	|         | busybox-fc5497c4f-qm589 -- nslookup  |                   |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |                   |         |                     |                     |
	| kubectl | -p ha-609500 -- get pods -o          | ha-609500         | minikube6\jenkins | v1.33.1 | 04 Jun 24 22:22 UTC | 04 Jun 24 22:22 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |                   |                   |         |                     |                     |
	| kubectl | -p ha-609500 -- exec                 | ha-609500         | minikube6\jenkins | v1.33.1 | 04 Jun 24 22:22 UTC | 04 Jun 24 22:22 UTC |
	|         | busybox-fc5497c4f-gbl9h              |                   |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |                   |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |                   |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |                   |                   |         |                     |                     |
	| kubectl | -p ha-609500 -- exec                 | ha-609500         | minikube6\jenkins | v1.33.1 | 04 Jun 24 22:22 UTC |                     |
	|         | busybox-fc5497c4f-gbl9h -- sh        |                   |                   |         |                     |                     |
	|         | -c ping -c 1 172.20.128.1            |                   |                   |         |                     |                     |
	| kubectl | -p ha-609500 -- exec                 | ha-609500         | minikube6\jenkins | v1.33.1 | 04 Jun 24 22:22 UTC | 04 Jun 24 22:22 UTC |
	|         | busybox-fc5497c4f-m2dsk              |                   |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |                   |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |                   |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |                   |                   |         |                     |                     |
	| kubectl | -p ha-609500 -- exec                 | ha-609500         | minikube6\jenkins | v1.33.1 | 04 Jun 24 22:22 UTC |                     |
	|         | busybox-fc5497c4f-m2dsk -- sh        |                   |                   |         |                     |                     |
	|         | -c ping -c 1 172.20.128.1            |                   |                   |         |                     |                     |
	| kubectl | -p ha-609500 -- exec                 | ha-609500         | minikube6\jenkins | v1.33.1 | 04 Jun 24 22:22 UTC | 04 Jun 24 22:22 UTC |
	|         | busybox-fc5497c4f-qm589              |                   |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |                   |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |                   |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |                   |                   |         |                     |                     |
	| kubectl | -p ha-609500 -- exec                 | ha-609500         | minikube6\jenkins | v1.33.1 | 04 Jun 24 22:22 UTC |                     |
	|         | busybox-fc5497c4f-qm589 -- sh        |                   |                   |         |                     |                     |
	|         | -c ping -c 1 172.20.128.1            |                   |                   |         |                     |                     |
	|---------|--------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/04 22:09:13
	Running on machine: minikube6
	Binary: Built with gc go1.22.3 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0604 22:09:13.628693    4212 out.go:291] Setting OutFile to fd 1068 ...
	I0604 22:09:13.629002    4212 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0604 22:09:13.629002    4212 out.go:304] Setting ErrFile to fd 884...
	I0604 22:09:13.629002    4212 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0604 22:09:13.654995    4212 out.go:298] Setting JSON to false
	I0604 22:09:13.659956    4212 start.go:129] hostinfo: {"hostname":"minikube6","uptime":86203,"bootTime":1717452750,"procs":186,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4474 Build 19045.4474","kernelVersion":"10.0.19045.4474 Build 19045.4474","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0604 22:09:13.659956    4212 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0604 22:09:13.664444    4212 out.go:177] * [ha-609500] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4474 Build 19045.4474
	I0604 22:09:13.671226    4212 notify.go:220] Checking for updates...
	I0604 22:09:13.673771    4212 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0604 22:09:13.676363    4212 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0604 22:09:13.678936    4212 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0604 22:09:13.681495    4212 out.go:177]   - MINIKUBE_LOCATION=19024
	I0604 22:09:13.684035    4212 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0604 22:09:13.686148    4212 driver.go:392] Setting default libvirt URI to qemu:///system
	I0604 22:09:19.404594    4212 out.go:177] * Using the hyperv driver based on user configuration
	I0604 22:09:19.408723    4212 start.go:297] selected driver: hyperv
	I0604 22:09:19.408723    4212 start.go:901] validating driver "hyperv" against <nil>
	I0604 22:09:19.408723    4212 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0604 22:09:19.462869    4212 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0604 22:09:19.463609    4212 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0604 22:09:19.463609    4212 cni.go:84] Creating CNI manager for ""
	I0604 22:09:19.463609    4212 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0604 22:09:19.463609    4212 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0604 22:09:19.463609    4212 start.go:340] cluster config:
	{Name:ha-609500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717518322-19024@sha256:d2210ba725128d67c6173c8b8d82d6c8736e8dad7a6c389a278f795205c6764f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-609500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0604 22:09:19.464866    4212 iso.go:125] acquiring lock: {Name:mkd51e140550ee3ad29317eefa47594b071594dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0604 22:09:19.469379    4212 out.go:177] * Starting "ha-609500" primary control-plane node in "ha-609500" cluster
	I0604 22:09:19.471564    4212 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0604 22:09:19.471564    4212 preload.go:147] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
	I0604 22:09:19.471564    4212 cache.go:56] Caching tarball of preloaded images
	I0604 22:09:19.471564    4212 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0604 22:09:19.471564    4212 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0604 22:09:19.474946    4212 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\config.json ...
	I0604 22:09:19.474946    4212 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\config.json: {Name:mkc3bcc5a7016d2cd3c4b8a4fd482a3f874b5e79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0604 22:09:19.475850    4212 start.go:360] acquireMachinesLock for ha-609500: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0604 22:09:19.475850    4212 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-609500"
	I0604 22:09:19.477320    4212 start.go:93] Provisioning new machine with config: &{Name:ha-609500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19024/minikube-v1.33.1-1717518792-19024-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717518322-19024@sha256:d2210ba725128d67c6173c8b8d82d6c8736e8dad7a6c389a278f795205c6764f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-609500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0604 22:09:19.477320    4212 start.go:125] createHost starting for "" (driver="hyperv")
	I0604 22:09:19.477658    4212 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0604 22:09:19.482497    4212 start.go:159] libmachine.API.Create for "ha-609500" (driver="hyperv")
	I0604 22:09:19.482497    4212 client.go:168] LocalClient.Create starting
	I0604 22:09:19.482649    4212 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0604 22:09:19.482649    4212 main.go:141] libmachine: Decoding PEM data...
	I0604 22:09:19.482649    4212 main.go:141] libmachine: Parsing certificate...
	I0604 22:09:19.482649    4212 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0604 22:09:19.483683    4212 main.go:141] libmachine: Decoding PEM data...
	I0604 22:09:19.483714    4212 main.go:141] libmachine: Parsing certificate...
	I0604 22:09:19.483880    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0604 22:09:21.695092    4212 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0604 22:09:21.695092    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:09:21.705639    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0604 22:09:23.524881    4212 main.go:141] libmachine: [stdout =====>] : False
	
	I0604 22:09:23.534211    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:09:23.534350    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0604 22:09:25.161177    4212 main.go:141] libmachine: [stdout =====>] : True
	
	I0604 22:09:25.161177    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:09:25.161561    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0604 22:09:28.900813    4212 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0604 22:09:28.915352    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:09:28.918065    4212 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1717518792-19024-amd64.iso...
	I0604 22:09:29.464073    4212 main.go:141] libmachine: Creating SSH key...
	I0604 22:09:29.777217    4212 main.go:141] libmachine: Creating VM...
	I0604 22:09:29.777639    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0604 22:09:32.778698    4212 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0604 22:09:32.791354    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:09:32.791497    4212 main.go:141] libmachine: Using switch "Default Switch"
	I0604 22:09:32.791618    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0604 22:09:34.638193    4212 main.go:141] libmachine: [stdout =====>] : True
	
	I0604 22:09:34.638193    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:09:34.638193    4212 main.go:141] libmachine: Creating VHD
	I0604 22:09:34.638293    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-609500\fixed.vhd' -SizeBytes 10MB -Fixed
	I0604 22:09:38.576164    4212 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-609500\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 89A539C1-C84F-40F1-9263-948AF4BDDF8B
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0604 22:09:38.576164    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:09:38.587724    4212 main.go:141] libmachine: Writing magic tar header
	I0604 22:09:38.587724    4212 main.go:141] libmachine: Writing SSH key tar header
	I0604 22:09:38.600307    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-609500\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-609500\disk.vhd' -VHDType Dynamic -DeleteSource
	I0604 22:09:41.888162    4212 main.go:141] libmachine: [stdout =====>] : 
	I0604 22:09:41.900236    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:09:41.900236    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-609500\disk.vhd' -SizeBytes 20000MB
	I0604 22:09:44.606791    4212 main.go:141] libmachine: [stdout =====>] : 
	I0604 22:09:44.606791    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:09:44.606791    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-609500 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-609500' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0604 22:09:48.451972    4212 main.go:141] libmachine: [stdout =====>] : 
	Name      State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----      ----- ----------- ----------------- ------   ------             -------
	ha-609500 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0604 22:09:48.452170    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:09:48.452170    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-609500 -DynamicMemoryEnabled $false
	I0604 22:09:50.842484    4212 main.go:141] libmachine: [stdout =====>] : 
	I0604 22:09:50.842484    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:09:50.842484    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-609500 -Count 2
	I0604 22:09:53.191068    4212 main.go:141] libmachine: [stdout =====>] : 
	I0604 22:09:53.191283    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:09:53.191283    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-609500 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-609500\boot2docker.iso'
	I0604 22:09:55.933930    4212 main.go:141] libmachine: [stdout =====>] : 
	I0604 22:09:55.934151    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:09:55.934151    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-609500 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-609500\disk.vhd'
	I0604 22:09:58.752394    4212 main.go:141] libmachine: [stdout =====>] : 
	I0604 22:09:58.752394    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:09:58.752394    4212 main.go:141] libmachine: Starting VM...
	I0604 22:09:58.752737    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-609500
	I0604 22:10:01.953951    4212 main.go:141] libmachine: [stdout =====>] : 
	I0604 22:10:01.953951    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:10:01.953951    4212 main.go:141] libmachine: Waiting for host to start...
	I0604 22:10:01.955958    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500 ).state
	I0604 22:10:04.389904    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:10:04.389904    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:10:04.392421    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500 ).networkadapters[0]).ipaddresses[0]
	I0604 22:10:07.025192    4212 main.go:141] libmachine: [stdout =====>] : 
	I0604 22:10:07.032849    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:10:08.043187    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500 ).state
	I0604 22:10:10.378581    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:10:10.384712    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:10:10.384712    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500 ).networkadapters[0]).ipaddresses[0]
	I0604 22:10:13.082562    4212 main.go:141] libmachine: [stdout =====>] : 
	I0604 22:10:13.082562    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:10:14.090570    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500 ).state
	I0604 22:10:16.395237    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:10:16.400903    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:10:16.400996    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500 ).networkadapters[0]).ipaddresses[0]
	I0604 22:10:19.080360    4212 main.go:141] libmachine: [stdout =====>] : 
	I0604 22:10:19.080437    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:10:20.093055    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500 ).state
	I0604 22:10:22.425895    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:10:22.438241    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:10:22.438241    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500 ).networkadapters[0]).ipaddresses[0]
	I0604 22:10:25.075139    4212 main.go:141] libmachine: [stdout =====>] : 
	I0604 22:10:25.075301    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:10:26.090545    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500 ).state
	I0604 22:10:28.449958    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:10:28.449958    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:10:28.449958    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500 ).networkadapters[0]).ipaddresses[0]
	I0604 22:10:31.176260    4212 main.go:141] libmachine: [stdout =====>] : 172.20.131.101
	
	I0604 22:10:31.176260    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:10:31.189309    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500 ).state
	I0604 22:10:33.446338    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:10:33.458818    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:10:33.458905    4212 machine.go:94] provisionDockerMachine start ...
	I0604 22:10:33.458905    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500 ).state
	I0604 22:10:35.749407    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:10:35.762108    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:10:35.762108    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500 ).networkadapters[0]).ipaddresses[0]
	I0604 22:10:38.470367    4212 main.go:141] libmachine: [stdout =====>] : 172.20.131.101
	
	I0604 22:10:38.470367    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:10:38.489285    4212 main.go:141] libmachine: Using SSH client type: native
	I0604 22:10:38.505466    4212 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf6a540] 0xf6d120 <nil>  [] 0s} 172.20.131.101 22 <nil> <nil>}
	I0604 22:10:38.505466    4212 main.go:141] libmachine: About to run SSH command:
	hostname
	I0604 22:10:38.636933    4212 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0604 22:10:38.636933    4212 buildroot.go:166] provisioning hostname "ha-609500"
	I0604 22:10:38.636933    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500 ).state
	I0604 22:10:40.883315    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:10:40.883508    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:10:40.883607    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500 ).networkadapters[0]).ipaddresses[0]
	I0604 22:10:43.544680    4212 main.go:141] libmachine: [stdout =====>] : 172.20.131.101
	
	I0604 22:10:43.544680    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:10:43.551335    4212 main.go:141] libmachine: Using SSH client type: native
	I0604 22:10:43.551660    4212 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf6a540] 0xf6d120 <nil>  [] 0s} 172.20.131.101 22 <nil> <nil>}
	I0604 22:10:43.551660    4212 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-609500 && echo "ha-609500" | sudo tee /etc/hostname
	I0604 22:10:43.717271    4212 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-609500
	
	I0604 22:10:43.717413    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500 ).state
	I0604 22:10:45.974635    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:10:45.974829    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:10:45.974829    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500 ).networkadapters[0]).ipaddresses[0]
	I0604 22:10:48.700837    4212 main.go:141] libmachine: [stdout =====>] : 172.20.131.101
	
	I0604 22:10:48.700928    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:10:48.707608    4212 main.go:141] libmachine: Using SSH client type: native
	I0604 22:10:48.708115    4212 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf6a540] 0xf6d120 <nil>  [] 0s} 172.20.131.101 22 <nil> <nil>}
	I0604 22:10:48.708224    4212 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-609500' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-609500/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-609500' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0604 22:10:48.856465    4212 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0604 22:10:48.856465    4212 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0604 22:10:48.857002    4212 buildroot.go:174] setting up certificates
	I0604 22:10:48.857002    4212 provision.go:84] configureAuth start
	I0604 22:10:48.857085    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500 ).state
	I0604 22:10:51.122268    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:10:51.122335    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:10:51.122335    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500 ).networkadapters[0]).ipaddresses[0]
	I0604 22:10:53.842112    4212 main.go:141] libmachine: [stdout =====>] : 172.20.131.101
	
	I0604 22:10:53.854634    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:10:53.854725    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500 ).state
	I0604 22:10:56.102411    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:10:56.102411    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:10:56.114276    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500 ).networkadapters[0]).ipaddresses[0]
	I0604 22:10:58.794096    4212 main.go:141] libmachine: [stdout =====>] : 172.20.131.101
	
	I0604 22:10:58.805305    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:10:58.805476    4212 provision.go:143] copyHostCerts
	I0604 22:10:58.805602    4212 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0604 22:10:58.805602    4212 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0604 22:10:58.805602    4212 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0604 22:10:58.806458    4212 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0604 22:10:58.807323    4212 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0604 22:10:58.807323    4212 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0604 22:10:58.807323    4212 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0604 22:10:58.808011    4212 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0604 22:10:58.809050    4212 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0604 22:10:58.809219    4212 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0604 22:10:58.809219    4212 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0604 22:10:58.809219    4212 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0604 22:10:58.810427    4212 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-609500 san=[127.0.0.1 172.20.131.101 ha-609500 localhost minikube]
	I0604 22:10:59.098187    4212 provision.go:177] copyRemoteCerts
	I0604 22:10:59.114184    4212 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0604 22:10:59.114184    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500 ).state
	I0604 22:11:01.346160    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:11:01.346160    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:11:01.359602    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500 ).networkadapters[0]).ipaddresses[0]
	I0604 22:11:04.065314    4212 main.go:141] libmachine: [stdout =====>] : 172.20.131.101
	
	I0604 22:11:04.066033    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:11:04.066033    4212 sshutil.go:53] new ssh client: &{IP:172.20.131.101 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-609500\id_rsa Username:docker}
	I0604 22:11:04.177718    4212 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.0634487s)
	I0604 22:11:04.177756    4212 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0604 22:11:04.177756    4212 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0604 22:11:04.240542    4212 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0604 22:11:04.241195    4212 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0604 22:11:04.289231    4212 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0604 22:11:04.289666    4212 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1196 bytes)
	I0604 22:11:04.344992    4212 provision.go:87] duration metric: took 15.487718s to configureAuth
	I0604 22:11:04.344992    4212 buildroot.go:189] setting minikube options for container-runtime
	I0604 22:11:04.344992    4212 config.go:182] Loaded profile config "ha-609500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0604 22:11:04.345682    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500 ).state
	I0604 22:11:06.576728    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:11:06.576897    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:11:06.576897    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500 ).networkadapters[0]).ipaddresses[0]
	I0604 22:11:09.234198    4212 main.go:141] libmachine: [stdout =====>] : 172.20.131.101
	
	I0604 22:11:09.246134    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:11:09.250815    4212 main.go:141] libmachine: Using SSH client type: native
	I0604 22:11:09.251455    4212 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf6a540] 0xf6d120 <nil>  [] 0s} 172.20.131.101 22 <nil> <nil>}
	I0604 22:11:09.251455    4212 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0604 22:11:09.382003    4212 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0604 22:11:09.382003    4212 buildroot.go:70] root file system type: tmpfs
	I0604 22:11:09.382003    4212 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0604 22:11:09.382598    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500 ).state
	I0604 22:11:11.621981    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:11:11.634868    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:11:11.634868    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500 ).networkadapters[0]).ipaddresses[0]
	I0604 22:11:14.385008    4212 main.go:141] libmachine: [stdout =====>] : 172.20.131.101
	
	I0604 22:11:14.397190    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:11:14.402496    4212 main.go:141] libmachine: Using SSH client type: native
	I0604 22:11:14.403412    4212 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf6a540] 0xf6d120 <nil>  [] 0s} 172.20.131.101 22 <nil> <nil>}
	I0604 22:11:14.403412    4212 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0604 22:11:14.564582    4212 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0604 22:11:14.564582    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500 ).state
	I0604 22:11:16.836580    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:11:16.836580    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:11:16.844418    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500 ).networkadapters[0]).ipaddresses[0]
	I0604 22:11:19.536830    4212 main.go:141] libmachine: [stdout =====>] : 172.20.131.101
	
	I0604 22:11:19.549576    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:11:19.555869    4212 main.go:141] libmachine: Using SSH client type: native
	I0604 22:11:19.555869    4212 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf6a540] 0xf6d120 <nil>  [] 0s} 172.20.131.101 22 <nil> <nil>}
	I0604 22:11:19.556405    4212 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0604 22:11:21.788402    4212 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0604 22:11:21.788402    4212 machine.go:97] duration metric: took 48.3291099s to provisionDockerMachine
	I0604 22:11:21.788402    4212 client.go:171] duration metric: took 2m2.3049355s to LocalClient.Create
	I0604 22:11:21.788402    4212 start.go:167] duration metric: took 2m2.3049672s to libmachine.API.Create "ha-609500"
	I0604 22:11:21.788402    4212 start.go:293] postStartSetup for "ha-609500" (driver="hyperv")
	I0604 22:11:21.788402    4212 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0604 22:11:21.802521    4212 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0604 22:11:21.802521    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500 ).state
	I0604 22:11:24.093649    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:11:24.096991    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:11:24.096991    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500 ).networkadapters[0]).ipaddresses[0]
	I0604 22:11:26.827074    4212 main.go:141] libmachine: [stdout =====>] : 172.20.131.101
	
	I0604 22:11:26.827074    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:11:26.829488    4212 sshutil.go:53] new ssh client: &{IP:172.20.131.101 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-609500\id_rsa Username:docker}
	I0604 22:11:26.938968    4212 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.1364053s)
	I0604 22:11:26.950972    4212 ssh_runner.go:195] Run: cat /etc/os-release
	I0604 22:11:26.960331    4212 info.go:137] Remote host: Buildroot 2023.02.9
	I0604 22:11:26.960331    4212 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0604 22:11:26.961029    4212 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0604 22:11:26.962234    4212 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\140642.pem -> 140642.pem in /etc/ssl/certs
	I0604 22:11:26.962234    4212 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\140642.pem -> /etc/ssl/certs/140642.pem
	I0604 22:11:26.975614    4212 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0604 22:11:26.994900    4212 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\140642.pem --> /etc/ssl/certs/140642.pem (1708 bytes)
	I0604 22:11:27.055995    4212 start.go:296] duration metric: took 5.2674685s for postStartSetup
	I0604 22:11:27.058600    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500 ).state
	I0604 22:11:29.317103    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:11:29.329697    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:11:29.330044    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500 ).networkadapters[0]).ipaddresses[0]
	I0604 22:11:32.044132    4212 main.go:141] libmachine: [stdout =====>] : 172.20.131.101
	
	I0604 22:11:32.044132    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:11:32.044132    4212 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\config.json ...
	I0604 22:11:32.060697    4212 start.go:128] duration metric: took 2m12.5812428s to createHost
	I0604 22:11:32.060823    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500 ).state
	I0604 22:11:34.306423    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:11:34.306423    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:11:34.306423    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500 ).networkadapters[0]).ipaddresses[0]
	I0604 22:11:36.973663    4212 main.go:141] libmachine: [stdout =====>] : 172.20.131.101
	
	I0604 22:11:36.973663    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:11:36.992635    4212 main.go:141] libmachine: Using SSH client type: native
	I0604 22:11:36.992635    4212 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf6a540] 0xf6d120 <nil>  [] 0s} 172.20.131.101 22 <nil> <nil>}
	I0604 22:11:36.992635    4212 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0604 22:11:37.125467    4212 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717539097.132815399
	
	I0604 22:11:37.125467    4212 fix.go:216] guest clock: 1717539097.132815399
	I0604 22:11:37.126001    4212 fix.go:229] Guest: 2024-06-04 22:11:37.132815399 +0000 UTC Remote: 2024-06-04 22:11:32.0608233 +0000 UTC m=+138.605005501 (delta=5.071992099s)
	I0604 22:11:37.126001    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500 ).state
	I0604 22:11:39.336473    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:11:39.336473    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:11:39.336473    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500 ).networkadapters[0]).ipaddresses[0]
	I0604 22:11:41.994201    4212 main.go:141] libmachine: [stdout =====>] : 172.20.131.101
	
	I0604 22:11:41.994201    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:11:42.000780    4212 main.go:141] libmachine: Using SSH client type: native
	I0604 22:11:42.000935    4212 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf6a540] 0xf6d120 <nil>  [] 0s} 172.20.131.101 22 <nil> <nil>}
	I0604 22:11:42.000935    4212 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1717539097
	I0604 22:11:42.144767    4212 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Jun  4 22:11:37 UTC 2024
	
	I0604 22:11:42.144767    4212 fix.go:236] clock set: Tue Jun  4 22:11:37 UTC 2024
	 (err=<nil>)
	I0604 22:11:42.144767    4212 start.go:83] releasing machines lock for "ha-609500", held for 2m22.6677822s
	I0604 22:11:42.144767    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500 ).state
	I0604 22:11:44.386347    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:11:44.400710    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:11:44.400710    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500 ).networkadapters[0]).ipaddresses[0]
	I0604 22:11:47.098251    4212 main.go:141] libmachine: [stdout =====>] : 172.20.131.101
	
	I0604 22:11:47.098428    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:11:47.102159    4212 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0604 22:11:47.102159    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500 ).state
	I0604 22:11:47.118618    4212 ssh_runner.go:195] Run: cat /version.json
	I0604 22:11:47.118618    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500 ).state
	I0604 22:11:49.395623    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:11:49.395823    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:11:49.395823    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500 ).networkadapters[0]).ipaddresses[0]
	I0604 22:11:49.404017    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:11:49.404017    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:11:49.404017    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500 ).networkadapters[0]).ipaddresses[0]
	I0604 22:11:52.144068    4212 main.go:141] libmachine: [stdout =====>] : 172.20.131.101
	
	I0604 22:11:52.156605    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:11:52.156730    4212 sshutil.go:53] new ssh client: &{IP:172.20.131.101 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-609500\id_rsa Username:docker}
	I0604 22:11:52.182765    4212 main.go:141] libmachine: [stdout =====>] : 172.20.131.101
	
	I0604 22:11:52.182765    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:11:52.183349    4212 sshutil.go:53] new ssh client: &{IP:172.20.131.101 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-609500\id_rsa Username:docker}
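
The two "new ssh client" entries above contain everything needed to reach the VM: IP 172.20.131.101, port 22, user "docker", and the profile's id_rsa key. A minimal, hypothetical Go sketch of such a client using golang.org/x/crypto/ssh (host-key verification is skipped here, which is only reasonable for disposable test VMs):

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path and target address taken from the sshutil.go log lines above.
	key, err := os.ReadFile(`C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-609500\id_rsa`)
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for throwaway test VMs
	}
	client, err := ssh.Dial("tcp", "172.20.131.101:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()

	// Same probe the runner issues right after the client is created.
	out, err := sess.CombinedOutput("cat /version.json")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(string(out))
}
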
	I0604 22:11:52.247337    4212 ssh_runner.go:235] Completed: cat /version.json: (5.1286771s)
	I0604 22:11:52.260959    4212 ssh_runner.go:195] Run: systemctl --version
	I0604 22:11:52.328685    4212 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.226484s)
	I0604 22:11:52.340641    4212 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0604 22:11:52.351598    4212 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0604 22:11:52.363137    4212 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0604 22:11:52.396484    4212 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0604 22:11:52.396573    4212 start.go:494] detecting cgroup driver to use...
	I0604 22:11:52.396573    4212 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0604 22:11:52.445458    4212 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0604 22:11:52.483257    4212 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0604 22:11:52.503276    4212 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0604 22:11:52.514143    4212 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0604 22:11:52.552096    4212 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0604 22:11:52.595083    4212 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0604 22:11:52.631029    4212 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0604 22:11:52.663789    4212 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0604 22:11:52.700208    4212 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0604 22:11:52.734628    4212 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0604 22:11:52.770366    4212 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0604 22:11:52.803140    4212 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0604 22:11:52.837164    4212 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0604 22:11:52.868958    4212 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0604 22:11:53.093971    4212 ssh_runner.go:195] Run: sudo systemctl restart containerd
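
The sed invocations above rewrite /etc/containerd/config.toml so that containerd uses the cgroupfs cgroup driver (SystemdCgroup = false), switches the v1 runtime entries to io.containerd.runc.v2, and points conf_dir at /etc/cni/net.d before the daemon is restarted. A small Go sketch of the SystemdCgroup rewrite, applied to an assumed in-memory copy of the file:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Assumed fragment of /etc/containerd/config.toml, for illustration only.
	config := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
`
	// Mirrors: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	fmt.Print(re.ReplaceAllString(config, "${1}SystemdCgroup = false"))
}
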
	I0604 22:11:53.134498    4212 start.go:494] detecting cgroup driver to use...
	I0604 22:11:53.146791    4212 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0604 22:11:53.188485    4212 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0604 22:11:53.225944    4212 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0604 22:11:53.277312    4212 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0604 22:11:53.316314    4212 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0604 22:11:53.357308    4212 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0604 22:11:53.423499    4212 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0604 22:11:53.451404    4212 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0604 22:11:53.499972    4212 ssh_runner.go:195] Run: which cri-dockerd
	I0604 22:11:53.519118    4212 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0604 22:11:53.539973    4212 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0604 22:11:53.592956    4212 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0604 22:11:53.808329    4212 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0604 22:11:54.018198    4212 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0604 22:11:54.018530    4212 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0604 22:11:54.064278    4212 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0604 22:11:54.280051    4212 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0604 22:11:56.839244    4212 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5591719s)
	I0604 22:11:56.849454    4212 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0604 22:11:56.890756    4212 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0604 22:11:56.929830    4212 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0604 22:11:57.133563    4212 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0604 22:11:57.333219    4212 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0604 22:11:57.541120    4212 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0604 22:11:57.589943    4212 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0604 22:11:57.631087    4212 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0604 22:11:57.837317    4212 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0604 22:11:57.956121    4212 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0604 22:11:57.970992    4212 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0604 22:11:57.983114    4212 start.go:562] Will wait 60s for crictl version
	I0604 22:11:57.994408    4212 ssh_runner.go:195] Run: which crictl
	I0604 22:11:58.013762    4212 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0604 22:11:58.074595    4212 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.1.3
	RuntimeApiVersion:  v1
	I0604 22:11:58.085076    4212 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0604 22:11:58.129196    4212 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0604 22:11:58.166609    4212 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.1.3 ...
	I0604 22:11:58.166609    4212 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0604 22:11:58.171470    4212 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0604 22:11:58.171470    4212 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0604 22:11:58.171470    4212 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0604 22:11:58.171470    4212 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:24:f8:85 Flags:up|broadcast|multicast|running}
	I0604 22:11:58.174385    4212 ip.go:210] interface addr: fe80::4093:d10:ab69:6c7d/64
	I0604 22:11:58.174385    4212 ip.go:210] interface addr: 172.20.128.1/20
	I0604 22:11:58.188520    4212 ssh_runner.go:195] Run: grep 172.20.128.1	host.minikube.internal$ /etc/hosts
	I0604 22:11:58.195861    4212 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.20.128.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0604 22:11:58.235626    4212 kubeadm.go:877] updating cluster {Name:ha-609500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19024/minikube-v1.33.1-1717518792-19024-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717518322-19024@sha256:d2210ba725128d67c6173c8b8d82d6c8736e8dad7a6c389a278f795205c6764f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-609500 Namespace:default APIServerHAVIP:172.20.143.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.131.101 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0604 22:11:58.235626    4212 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0604 22:11:58.240909    4212 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0604 22:11:58.273092    4212 docker.go:685] Got preloaded images: 
	I0604 22:11:58.273092    4212 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.1 wasn't preloaded
	I0604 22:11:58.286408    4212 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0604 22:11:58.321782    4212 ssh_runner.go:195] Run: which lz4
	I0604 22:11:58.327365    4212 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0604 22:11:58.340099    4212 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0604 22:11:58.349743    4212 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0604 22:11:58.349950    4212 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359564351 bytes)
	I0604 22:12:00.288171    4212 docker.go:649] duration metric: took 1.9583784s to copy over tarball
	I0604 22:12:00.299872    4212 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0604 22:12:08.779294    4212 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.4776668s)
	I0604 22:12:08.793962    4212 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0604 22:12:08.866528    4212 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0604 22:12:08.888348    4212 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0604 22:12:08.935856    4212 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0604 22:12:09.152358    4212 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0604 22:12:12.183325    4212 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.0309419s)
	I0604 22:12:12.193567    4212 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0604 22:12:12.218652    4212 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0604 22:12:12.218652    4212 cache_images.go:84] Images are preloaded, skipping loading
	I0604 22:12:12.218652    4212 kubeadm.go:928] updating node { 172.20.131.101 8443 v1.30.1 docker true true} ...
	I0604 22:12:12.219189    4212 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-609500 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.20.131.101
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-609500 Namespace:default APIServerHAVIP:172.20.143.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0604 22:12:12.228353    4212 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0604 22:12:12.264775    4212 cni.go:84] Creating CNI manager for ""
	I0604 22:12:12.264775    4212 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0604 22:12:12.264775    4212 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0604 22:12:12.264917    4212 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.20.131.101 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-609500 NodeName:ha-609500 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.20.131.101"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.20.131.101 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0604 22:12:12.265496    4212 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.20.131.101
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-609500"
	  kubeletExtraArgs:
	    node-ip: 172.20.131.101
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.20.131.101"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0604 22:12:12.265625    4212 kube-vip.go:115] generating kube-vip config ...
	I0604 22:12:12.278633    4212 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0604 22:12:12.306365    4212 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0604 22:12:12.307310    4212 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.20.143.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0604 22:12:12.319473    4212 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0604 22:12:12.336471    4212 binaries.go:44] Found k8s binaries, skipping transfer
	I0604 22:12:12.349039    4212 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0604 22:12:12.368317    4212 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (310 bytes)
	I0604 22:12:12.402617    4212 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0604 22:12:12.435954    4212 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0604 22:12:12.468305    4212 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0604 22:12:12.512058    4212 ssh_runner.go:195] Run: grep 172.20.143.254	control-plane.minikube.internal$ /etc/hosts
	I0604 22:12:12.514862    4212 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.20.143.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0604 22:12:12.550878    4212 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0604 22:12:12.749703    4212 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0604 22:12:12.779121    4212 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500 for IP: 172.20.131.101
	I0604 22:12:12.779431    4212 certs.go:194] generating shared ca certs ...
	I0604 22:12:12.779493    4212 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0604 22:12:12.780401    4212 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0604 22:12:12.780739    4212 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0604 22:12:12.780923    4212 certs.go:256] generating profile certs ...
	I0604 22:12:12.781740    4212 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\client.key
	I0604 22:12:12.781909    4212 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\client.crt with IP's: []
	I0604 22:12:13.030544    4212 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\client.crt ...
	I0604 22:12:13.030544    4212 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\client.crt: {Name:mk76295a403c9aeb3abfbf53fa2b5074ca3f3840 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0604 22:12:13.036636    4212 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\client.key ...
	I0604 22:12:13.036636    4212 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\client.key: {Name:mke61efda45fff399bb2b7780b981438fd466b43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0604 22:12:13.037858    4212 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\apiserver.key.cf55e88f
	I0604 22:12:13.038950    4212 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\apiserver.crt.cf55e88f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.20.131.101 172.20.143.254]
	I0604 22:12:13.310698    4212 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\apiserver.crt.cf55e88f ...
	I0604 22:12:13.310698    4212 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\apiserver.crt.cf55e88f: {Name:mk09bea6f2657d7aad3850bfc0259de68b634b6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0604 22:12:13.316201    4212 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\apiserver.key.cf55e88f ...
	I0604 22:12:13.316201    4212 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\apiserver.key.cf55e88f: {Name:mk609125828db1d8a4dca93b261182711db39ea8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0604 22:12:13.316889    4212 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\apiserver.crt.cf55e88f -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\apiserver.crt
	I0604 22:12:13.339963    4212 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\apiserver.key.cf55e88f -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\apiserver.key
	I0604 22:12:13.341224    4212 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\proxy-client.key
	I0604 22:12:13.341859    4212 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\proxy-client.crt with IP's: []
	I0604 22:12:13.500513    4212 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\proxy-client.crt ...
	I0604 22:12:13.500513    4212 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\proxy-client.crt: {Name:mk345d8640268e77e7bedddb09b0d06028d9e079 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0604 22:12:13.502089    4212 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\proxy-client.key ...
	I0604 22:12:13.502089    4212 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\proxy-client.key: {Name:mk1562ab6c3b41bbfe12183bd74dcf651200f9cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0604 22:12:13.503979    4212 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0604 22:12:13.504414    4212 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0604 22:12:13.504651    4212 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0604 22:12:13.504788    4212 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0604 22:12:13.504788    4212 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0604 22:12:13.504788    4212 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0604 22:12:13.504788    4212 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0604 22:12:13.510823    4212 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0604 22:12:13.518403    4212 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14064.pem (1338 bytes)
	W0604 22:12:13.519063    4212 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14064_empty.pem, impossibly tiny 0 bytes
	I0604 22:12:13.519063    4212 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0604 22:12:13.519411    4212 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0604 22:12:13.519669    4212 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0604 22:12:13.519915    4212 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0604 22:12:13.520167    4212 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\140642.pem (1708 bytes)
	I0604 22:12:13.520167    4212 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0604 22:12:13.520167    4212 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14064.pem -> /usr/share/ca-certificates/14064.pem
	I0604 22:12:13.520167    4212 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\140642.pem -> /usr/share/ca-certificates/140642.pem
	I0604 22:12:13.521640    4212 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0604 22:12:13.573390    4212 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0604 22:12:13.629212    4212 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0604 22:12:13.682187    4212 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0604 22:12:13.732428    4212 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0604 22:12:13.778800    4212 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0604 22:12:13.835734    4212 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0604 22:12:13.885900    4212 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0604 22:12:13.935256    4212 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0604 22:12:13.986239    4212 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14064.pem --> /usr/share/ca-certificates/14064.pem (1338 bytes)
	I0604 22:12:14.033867    4212 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\140642.pem --> /usr/share/ca-certificates/140642.pem (1708 bytes)
	I0604 22:12:14.091738    4212 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0604 22:12:14.139066    4212 ssh_runner.go:195] Run: openssl version
	I0604 22:12:14.163378    4212 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/140642.pem && ln -fs /usr/share/ca-certificates/140642.pem /etc/ssl/certs/140642.pem"
	I0604 22:12:14.197003    4212 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/140642.pem
	I0604 22:12:14.206778    4212 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun  4 21:50 /usr/share/ca-certificates/140642.pem
	I0604 22:12:14.219282    4212 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/140642.pem
	I0604 22:12:14.242468    4212 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/140642.pem /etc/ssl/certs/3ec20f2e.0"
	I0604 22:12:14.276739    4212 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0604 22:12:14.312375    4212 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0604 22:12:14.325881    4212 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun  4 21:33 /usr/share/ca-certificates/minikubeCA.pem
	I0604 22:12:14.339953    4212 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0604 22:12:14.360666    4212 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0604 22:12:14.396566    4212 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14064.pem && ln -fs /usr/share/ca-certificates/14064.pem /etc/ssl/certs/14064.pem"
	I0604 22:12:14.433330    4212 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14064.pem
	I0604 22:12:14.443014    4212 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun  4 21:50 /usr/share/ca-certificates/14064.pem
	I0604 22:12:14.455553    4212 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14064.pem
	I0604 22:12:14.480174    4212 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14064.pem /etc/ssl/certs/51391683.0"
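
Each CA bundle copied above is hashed with "openssl x509 -hash -noout" and then exposed under /etc/ssl/certs/<hash>.0 (for example b5213941.0 for minikubeCA.pem), which is how OpenSSL looks up trusted certificates by subject hash. A sketch of that step in Go, assuming the openssl CLI is installed on the machine running it:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// Ask openssl for the certificate's subject hash, as the log above does.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout",
		"-in", "/usr/share/ca-certificates/minikubeCA.pem").Output()
	if err != nil {
		log.Fatal(err)
	}
	hash := strings.TrimSpace(string(out))
	// The symlink itself is created with "ln -fs" in the log; printed here for illustration.
	fmt.Printf("ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/%s.0\n", hash)
}
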
	I0604 22:12:14.513334    4212 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0604 22:12:14.520255    4212 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0604 22:12:14.520838    4212 kubeadm.go:391] StartCluster: {Name:ha-609500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19024/minikube-v1.33.1-1717518792-19024-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717518322-19024@sha256:d2210ba725128d67c6173c8b8d82d6c8736e8dad7a6c389a278f795205c6764f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-609500 Namespace:default APIServerHAVIP:172.20.143.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.131.101 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0604 22:12:14.531475    4212 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0604 22:12:14.574684    4212 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0604 22:12:14.611675    4212 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0604 22:12:14.646694    4212 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0604 22:12:14.667660    4212 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0604 22:12:14.667718    4212 kubeadm.go:156] found existing configuration files:
	
	I0604 22:12:14.679059    4212 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0604 22:12:14.698727    4212 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0604 22:12:14.712319    4212 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0604 22:12:14.746741    4212 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0604 22:12:14.768900    4212 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0604 22:12:14.785345    4212 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0604 22:12:14.817601    4212 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0604 22:12:14.836810    4212 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0604 22:12:14.850114    4212 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0604 22:12:14.880024    4212 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0604 22:12:14.897317    4212 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0604 22:12:14.910125    4212 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0604 22:12:14.928321    4212 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0604 22:12:15.395513    4212 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0604 22:12:31.600002    4212 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0604 22:12:31.600124    4212 kubeadm.go:309] [preflight] Running pre-flight checks
	I0604 22:12:31.600332    4212 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0604 22:12:31.600612    4212 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0604 22:12:31.600781    4212 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0604 22:12:31.600781    4212 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0604 22:12:31.603627    4212 out.go:204]   - Generating certificates and keys ...
	I0604 22:12:31.603668    4212 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0604 22:12:31.603668    4212 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0604 22:12:31.603668    4212 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0604 22:12:31.603668    4212 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0604 22:12:31.604267    4212 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0604 22:12:31.604306    4212 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0604 22:12:31.604306    4212 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0604 22:12:31.604306    4212 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-609500 localhost] and IPs [172.20.131.101 127.0.0.1 ::1]
	I0604 22:12:31.604306    4212 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0604 22:12:31.604887    4212 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-609500 localhost] and IPs [172.20.131.101 127.0.0.1 ::1]
	I0604 22:12:31.605032    4212 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0604 22:12:31.605143    4212 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0604 22:12:31.605143    4212 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0604 22:12:31.605143    4212 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0604 22:12:31.605786    4212 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0604 22:12:31.605936    4212 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0604 22:12:31.606053    4212 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0604 22:12:31.606155    4212 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0604 22:12:31.606234    4212 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0604 22:12:31.606234    4212 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0604 22:12:31.606234    4212 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0604 22:12:31.611367    4212 out.go:204]   - Booting up control plane ...
	I0604 22:12:31.611541    4212 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0604 22:12:31.611623    4212 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0604 22:12:31.611750    4212 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0604 22:12:31.612027    4212 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0604 22:12:31.612027    4212 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0604 22:12:31.612027    4212 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0604 22:12:31.612632    4212 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0604 22:12:31.612632    4212 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0604 22:12:31.612632    4212 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.004475988s
	I0604 22:12:31.613206    4212 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0604 22:12:31.613385    4212 kubeadm.go:309] [api-check] The API server is healthy after 9.002398182s
	I0604 22:12:31.613385    4212 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0604 22:12:31.613385    4212 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0604 22:12:31.613922    4212 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0604 22:12:31.614269    4212 kubeadm.go:309] [mark-control-plane] Marking the node ha-609500 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0604 22:12:31.614269    4212 kubeadm.go:309] [bootstrap-token] Using token: 1j4sj8.yfunpww2vrg63q4l
	I0604 22:12:31.618451    4212 out.go:204]   - Configuring RBAC rules ...
	I0604 22:12:31.618451    4212 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0604 22:12:31.619597    4212 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0604 22:12:31.619597    4212 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0604 22:12:31.620140    4212 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0604 22:12:31.620367    4212 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0604 22:12:31.620584    4212 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0604 22:12:31.620815    4212 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0604 22:12:31.621037    4212 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0604 22:12:31.621037    4212 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0604 22:12:31.621037    4212 kubeadm.go:309] 
	I0604 22:12:31.621310    4212 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0604 22:12:31.621310    4212 kubeadm.go:309] 
	I0604 22:12:31.621446    4212 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0604 22:12:31.621446    4212 kubeadm.go:309] 
	I0604 22:12:31.621446    4212 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0604 22:12:31.621446    4212 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0604 22:12:31.621446    4212 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0604 22:12:31.621446    4212 kubeadm.go:309] 
	I0604 22:12:31.621446    4212 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0604 22:12:31.621446    4212 kubeadm.go:309] 
	I0604 22:12:31.621446    4212 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0604 22:12:31.621446    4212 kubeadm.go:309] 
	I0604 22:12:31.621446    4212 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0604 22:12:31.621446    4212 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0604 22:12:31.622753    4212 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0604 22:12:31.622753    4212 kubeadm.go:309] 
	I0604 22:12:31.622753    4212 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0604 22:12:31.622753    4212 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0604 22:12:31.622753    4212 kubeadm.go:309] 
	I0604 22:12:31.623339    4212 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 1j4sj8.yfunpww2vrg63q4l \
	I0604 22:12:31.623339    4212 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:0ff0b921e821f4428dbcf012dd411f291cf65527f1de8321919343295d694e15 \
	I0604 22:12:31.623339    4212 kubeadm.go:309] 	--control-plane 
	I0604 22:12:31.623339    4212 kubeadm.go:309] 
	I0604 22:12:31.623339    4212 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0604 22:12:31.623339    4212 kubeadm.go:309] 
	I0604 22:12:31.624034    4212 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 1j4sj8.yfunpww2vrg63q4l \
	I0604 22:12:31.624227    4212 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:0ff0b921e821f4428dbcf012dd411f291cf65527f1de8321919343295d694e15 
	I0604 22:12:31.624227    4212 cni.go:84] Creating CNI manager for ""
	I0604 22:12:31.624227    4212 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0604 22:12:31.625439    4212 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0604 22:12:31.634469    4212 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0604 22:12:31.653088    4212 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.1/kubectl ...
	I0604 22:12:31.653088    4212 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0604 22:12:31.707586    4212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0604 22:12:32.492793    4212 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0604 22:12:32.507981    4212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-609500 minikube.k8s.io/updated_at=2024_06_04T22_12_32_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=901ac483c3e1097c63cda7493d918b612a8127f5 minikube.k8s.io/name=ha-609500 minikube.k8s.io/primary=true
	I0604 22:12:32.507981    4212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 22:12:32.533228    4212 ops.go:34] apiserver oom_adj: -16
	I0604 22:12:32.758890    4212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 22:12:33.267466    4212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 22:12:33.759564    4212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 22:12:34.270022    4212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 22:12:34.789017    4212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 22:12:35.277393    4212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 22:12:35.765950    4212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 22:12:36.273843    4212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 22:12:36.767726    4212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 22:12:37.273170    4212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 22:12:37.769398    4212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 22:12:38.274584    4212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 22:12:38.771162    4212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 22:12:39.269284    4212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 22:12:39.762128    4212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 22:12:40.271995    4212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 22:12:40.760625    4212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 22:12:41.266860    4212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 22:12:41.766893    4212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 22:12:42.272816    4212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 22:12:42.775274    4212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 22:12:43.261041    4212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 22:12:43.776255    4212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 22:12:44.271743    4212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 22:12:44.773991    4212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 22:12:44.917568    4212 kubeadm.go:1107] duration metric: took 12.4246731s to wait for elevateKubeSystemPrivileges
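
The repeated "kubectl get sa default" runs above are a poll-until-success loop: the elevateKubeSystemPrivileges step re-checks roughly every 500ms until the default service account exists, which here took about 12.4s. A minimal Go sketch of the same pattern, assuming kubectl is on PATH and already pointed at the cluster:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		// Succeeds once the "default" service account has been created.
		if err := exec.Command("kubectl", "get", "sa", "default").Run(); err == nil {
			fmt.Println("default service account is available")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for the default service account")
}
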
	W0604 22:12:44.917568    4212 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0604 22:12:44.917568    4212 kubeadm.go:393] duration metric: took 30.3964828s to StartCluster
	I0604 22:12:44.917568    4212 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0604 22:12:44.917568    4212 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0604 22:12:44.919316    4212 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0604 22:12:44.921131    4212 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0604 22:12:44.921227    4212 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:172.20.131.101 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0604 22:12:44.921227    4212 start.go:240] waiting for startup goroutines ...
	I0604 22:12:44.921285    4212 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0604 22:12:44.921419    4212 addons.go:69] Setting storage-provisioner=true in profile "ha-609500"
	I0604 22:12:44.921419    4212 addons.go:69] Setting default-storageclass=true in profile "ha-609500"
	I0604 22:12:44.921419    4212 addons.go:234] Setting addon storage-provisioner=true in "ha-609500"
	I0604 22:12:44.921680    4212 host.go:66] Checking if "ha-609500" exists ...
	I0604 22:12:44.921750    4212 config.go:182] Loaded profile config "ha-609500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0604 22:12:44.921481    4212 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-609500"
	I0604 22:12:44.922613    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500 ).state
	I0604 22:12:44.922613    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500 ).state
	I0604 22:12:45.092000    4212 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.20.128.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0604 22:12:45.515336    4212 start.go:946] {"host.minikube.internal": 172.20.128.1} host record injected into CoreDNS's ConfigMap
	I0604 22:12:47.331262    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:12:47.331262    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:12:47.332063    4212 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0604 22:12:47.333646    4212 kapi.go:59] client config for ha-609500: &rest.Config{Host:"https://172.20.143.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-609500\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-609500\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x240e1a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0604 22:12:47.335219    4212 cert_rotation.go:137] Starting client certificate rotation controller
	I0604 22:12:47.335484    4212 addons.go:234] Setting addon default-storageclass=true in "ha-609500"
	I0604 22:12:47.335484    4212 host.go:66] Checking if "ha-609500" exists ...
	I0604 22:12:47.336306    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500 ).state
	I0604 22:12:47.347619    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:12:47.347619    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:12:47.352777    4212 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0604 22:12:47.355094    4212 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0604 22:12:47.355094    4212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0604 22:12:47.355094    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500 ).state
	I0604 22:12:49.747419    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:12:49.747419    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:12:49.757056    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500 ).networkadapters[0]).ipaddresses[0]
	I0604 22:12:49.837650    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:12:49.837874    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:12:49.837963    4212 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0604 22:12:49.837963    4212 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0604 22:12:49.838028    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500 ).state
	I0604 22:12:52.235114    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:12:52.235114    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:12:52.235114    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500 ).networkadapters[0]).ipaddresses[0]
	I0604 22:12:52.661631    4212 main.go:141] libmachine: [stdout =====>] : 172.20.131.101
	
	I0604 22:12:52.661631    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:12:52.662506    4212 sshutil.go:53] new ssh client: &{IP:172.20.131.101 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-609500\id_rsa Username:docker}
	I0604 22:12:52.830537    4212 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0604 22:12:55.022069    4212 main.go:141] libmachine: [stdout =====>] : 172.20.131.101
	
	I0604 22:12:55.022069    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:12:55.022069    4212 sshutil.go:53] new ssh client: &{IP:172.20.131.101 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-609500\id_rsa Username:docker}
	I0604 22:12:55.163616    4212 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0604 22:12:55.323129    4212 round_trippers.go:463] GET https://172.20.143.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0604 22:12:55.323129    4212 round_trippers.go:469] Request Headers:
	I0604 22:12:55.323129    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:12:55.323129    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:12:55.337375    4212 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0604 22:12:55.338254    4212 round_trippers.go:463] PUT https://172.20.143.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0604 22:12:55.338254    4212 round_trippers.go:469] Request Headers:
	I0604 22:12:55.338344    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:12:55.338344    4212 round_trippers.go:473]     Content-Type: application/json
	I0604 22:12:55.338344    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:12:55.345690    4212 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0604 22:12:55.354049    4212 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0604 22:12:55.356724    4212 addons.go:510] duration metric: took 10.4353533s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0604 22:12:55.356724    4212 start.go:245] waiting for cluster config update ...
	I0604 22:12:55.356724    4212 start.go:254] writing updated cluster config ...
	I0604 22:12:55.360090    4212 out.go:177] 
	I0604 22:12:55.373276    4212 config.go:182] Loaded profile config "ha-609500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0604 22:12:55.373276    4212 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\config.json ...
	I0604 22:12:55.379958    4212 out.go:177] * Starting "ha-609500-m02" control-plane node in "ha-609500" cluster
	I0604 22:12:55.384338    4212 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0604 22:12:55.384338    4212 cache.go:56] Caching tarball of preloaded images
	I0604 22:12:55.384338    4212 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0604 22:12:55.385017    4212 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0604 22:12:55.385104    4212 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\config.json ...
	I0604 22:12:55.390174    4212 start.go:360] acquireMachinesLock for ha-609500-m02: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0604 22:12:55.390174    4212 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-609500-m02"
	I0604 22:12:55.390750    4212 start.go:93] Provisioning new machine with config: &{Name:ha-609500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19024/minikube-v1.33.1-1717518792-19024-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717518322-19024@sha256:d2210ba725128d67c6173c8b8d82d6c8736e8dad7a6c389a278f795205c6764f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-609500 Namespace:default APIServerHAVIP:172.20.143.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.131.101 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0604 22:12:55.390750    4212 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0604 22:12:55.393292    4212 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0604 22:12:55.394145    4212 start.go:159] libmachine.API.Create for "ha-609500" (driver="hyperv")
	I0604 22:12:55.394145    4212 client.go:168] LocalClient.Create starting
	I0604 22:12:55.394145    4212 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0604 22:12:55.394886    4212 main.go:141] libmachine: Decoding PEM data...
	I0604 22:12:55.394923    4212 main.go:141] libmachine: Parsing certificate...
	I0604 22:12:55.395150    4212 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0604 22:12:55.395329    4212 main.go:141] libmachine: Decoding PEM data...
	I0604 22:12:55.395329    4212 main.go:141] libmachine: Parsing certificate...
	I0604 22:12:55.395482    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0604 22:12:57.415389    4212 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0604 22:12:57.415389    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:12:57.426539    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0604 22:12:59.309789    4212 main.go:141] libmachine: [stdout =====>] : False
	
	I0604 22:12:59.309789    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:12:59.309789    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0604 22:13:00.892986    4212 main.go:141] libmachine: [stdout =====>] : True
	
	I0604 22:13:00.892986    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:13:00.902643    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0604 22:13:04.868770    4212 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0604 22:13:04.869580    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:13:04.872161    4212 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1717518792-19024-amd64.iso...
	I0604 22:13:05.404387    4212 main.go:141] libmachine: Creating SSH key...
	I0604 22:13:05.942383    4212 main.go:141] libmachine: Creating VM...
	I0604 22:13:05.942383    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0604 22:13:09.029343    4212 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0604 22:13:09.029343    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:13:09.029343    4212 main.go:141] libmachine: Using switch "Default Switch"
	I0604 22:13:09.029343    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0604 22:13:10.864221    4212 main.go:141] libmachine: [stdout =====>] : True
	
	I0604 22:13:10.864457    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:13:10.864457    4212 main.go:141] libmachine: Creating VHD
	I0604 22:13:10.864612    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-609500-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0604 22:13:15.035846    4212 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-609500-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : F212B8AB-F9CA-4C49-9ABF-10BCC6A6423A
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0604 22:13:15.035846    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:13:15.035846    4212 main.go:141] libmachine: Writing magic tar header
	I0604 22:13:15.035846    4212 main.go:141] libmachine: Writing SSH key tar header
	I0604 22:13:15.050524    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-609500-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-609500-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0604 22:13:18.469043    4212 main.go:141] libmachine: [stdout =====>] : 
	I0604 22:13:18.469043    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:13:18.469233    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-609500-m02\disk.vhd' -SizeBytes 20000MB
	I0604 22:13:21.292465    4212 main.go:141] libmachine: [stdout =====>] : 
	I0604 22:13:21.292465    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:13:21.292548    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-609500-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-609500-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0604 22:13:25.461726    4212 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%!)(MISSING) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-609500-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0604 22:13:25.461726    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:13:25.461726    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-609500-m02 -DynamicMemoryEnabled $false
	I0604 22:13:28.030036    4212 main.go:141] libmachine: [stdout =====>] : 
	I0604 22:13:28.030036    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:13:28.030199    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-609500-m02 -Count 2
	I0604 22:13:30.524025    4212 main.go:141] libmachine: [stdout =====>] : 
	I0604 22:13:30.524025    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:13:30.524431    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-609500-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-609500-m02\boot2docker.iso'
	I0604 22:13:33.450121    4212 main.go:141] libmachine: [stdout =====>] : 
	I0604 22:13:33.450121    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:13:33.450678    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-609500-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-609500-m02\disk.vhd'
	I0604 22:13:36.463800    4212 main.go:141] libmachine: [stdout =====>] : 
	I0604 22:13:36.463800    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:13:36.463800    4212 main.go:141] libmachine: Starting VM...
	I0604 22:13:36.464119    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-609500-m02
	I0604 22:13:39.890126    4212 main.go:141] libmachine: [stdout =====>] : 
	I0604 22:13:39.890126    4212 main.go:141] libmachine: [stderr =====>] : 
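
The cmdlet sequence above builds the ha-609500-m02 guest entirely through PowerShell: New-VHD creates a small fixed disk that is then seeded with the magic and SSH-key tar headers, Convert-VHD turns it into a dynamic disk, Resize-VHD grows it to 20000MB, and New-VM, Set-VMMemory, Set-VMProcessor, Set-VMDvdDrive and Add-VMHardDiskDrive assemble the machine before Start-VM boots it. A hedged Go sketch that simply replays those logged cmdlets through powershell.exe (not minikube's actual Hyper-V driver, and with the paths hard-coded to the ones in this run) is:

package main

import (
	"fmt"
	"os/exec"
)

// ps runs one PowerShell command the same way the logged libmachine driver
// does: powershell.exe -NoProfile -NonInteractive <command>.
func ps(command string) error {
	out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", command).CombinedOutput()
	fmt.Printf("[output] %s\n", out)
	return err
}

func main() {
	// Condensed, hypothetical replay of the cmdlet sequence in the log above.
	dir := `C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-609500-m02`
	steps := []string{
		`Hyper-V\New-VHD -Path '` + dir + `\fixed.vhd' -SizeBytes 10MB -Fixed`,
		`Hyper-V\Convert-VHD -Path '` + dir + `\fixed.vhd' -DestinationPath '` + dir + `\disk.vhd' -VHDType Dynamic -DeleteSource`,
		`Hyper-V\Resize-VHD -Path '` + dir + `\disk.vhd' -SizeBytes 20000MB`,
		`Hyper-V\New-VM ha-609500-m02 -Path '` + dir + `' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB`,
		`Hyper-V\Set-VMMemory -VMName ha-609500-m02 -DynamicMemoryEnabled $false`,
		`Hyper-V\Set-VMProcessor ha-609500-m02 -Count 2`,
		`Hyper-V\Set-VMDvdDrive -VMName ha-609500-m02 -Path '` + dir + `\boot2docker.iso'`,
		`Hyper-V\Add-VMHardDiskDrive -VMName ha-609500-m02 -Path '` + dir + `\disk.vhd'`,
		`Hyper-V\Start-VM ha-609500-m02`,
	}
	for _, s := range steps {
		if err := ps(s); err != nil {
			fmt.Println("step failed:", err)
			return
		}
	}
}

The real driver additionally verifies the Hyper-V module and administrator rights and selects a virtual switch first, as the earlier Get-VMSwitch and IsInRole checks show.
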
	I0604 22:13:39.890126    4212 main.go:141] libmachine: Waiting for host to start...
	I0604 22:13:39.891070    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500-m02 ).state
	I0604 22:13:42.409572    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:13:42.409572    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:13:42.409572    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500-m02 ).networkadapters[0]).ipaddresses[0]
	I0604 22:13:45.275574    4212 main.go:141] libmachine: [stdout =====>] : 
	I0604 22:13:45.275574    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:13:46.278990    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500-m02 ).state
	I0604 22:13:48.824347    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:13:48.824347    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:13:48.825052    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500-m02 ).networkadapters[0]).ipaddresses[0]
	I0604 22:13:51.727524    4212 main.go:141] libmachine: [stdout =====>] : 
	I0604 22:13:51.727524    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:13:52.737192    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500-m02 ).state
	I0604 22:13:55.200193    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:13:55.200193    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:13:55.201174    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500-m02 ).networkadapters[0]).ipaddresses[0]
	I0604 22:13:58.086538    4212 main.go:141] libmachine: [stdout =====>] : 
	I0604 22:13:58.086606    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:13:59.099925    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500-m02 ).state
	I0604 22:14:01.589274    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:14:01.590269    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:14:01.590269    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500-m02 ).networkadapters[0]).ipaddresses[0]
	I0604 22:14:04.458502    4212 main.go:141] libmachine: [stdout =====>] : 
	I0604 22:14:04.458502    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:14:05.464476    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500-m02 ).state
	I0604 22:14:07.964749    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:14:07.964749    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:14:07.964939    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500-m02 ).networkadapters[0]).ipaddresses[0]
	I0604 22:14:10.832001    4212 main.go:141] libmachine: [stdout =====>] : 
	I0604 22:14:10.833015    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:14:11.839434    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500-m02 ).state
	I0604 22:14:14.297307    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:14:14.297963    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:14:14.297963    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500-m02 ).networkadapters[0]).ipaddresses[0]
	I0604 22:14:17.202167    4212 main.go:141] libmachine: [stdout =====>] : 172.20.128.86
	
	I0604 22:14:17.202167    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:14:17.202777    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500-m02 ).state
	I0604 22:14:19.626552    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:14:19.626552    4212 main.go:141] libmachine: [stderr =====>] : 
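
Once Start-VM returns, the driver sits in the wait loop shown above: it alternates a Get-VM state query with a read of the first adapter's first IP address, pausing about a second between rounds, until a non-empty IPv4 address (172.20.128.86 here) comes back. A compact Go illustration of that wait, again assuming direct powershell.exe calls rather than the driver's own plumbing:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForIP is an illustrative stand-in for the "Waiting for host to start..."
// loop in the log: keep asking Hyper-V for the first adapter's first address
// until something non-empty comes back or the deadline passes.
func waitForIP(vm string, timeout time.Duration) (string, error) {
	query := fmt.Sprintf("(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", vm)
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", query).Output()
		ip := strings.TrimSpace(string(out))
		if err == nil && ip != "" {
			return ip, nil
		}
		time.Sleep(time.Second)
	}
	return "", fmt.Errorf("no IP for %s after %s", vm, timeout)
}

func main() {
	ip, err := waitForIP("ha-609500-m02", 5*time.Minute)
	fmt.Println(ip, err)
}

Empty output simply means the guest has not picked up an address yet, so the loop keeps retrying instead of failing.
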
	I0604 22:14:19.626850    4212 machine.go:94] provisionDockerMachine start ...
	I0604 22:14:19.626850    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500-m02 ).state
	I0604 22:14:22.134485    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:14:22.134485    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:14:22.134485    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500-m02 ).networkadapters[0]).ipaddresses[0]
	I0604 22:14:24.999627    4212 main.go:141] libmachine: [stdout =====>] : 172.20.128.86
	
	I0604 22:14:24.999691    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:14:25.006428    4212 main.go:141] libmachine: Using SSH client type: native
	I0604 22:14:25.006428    4212 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf6a540] 0xf6d120 <nil>  [] 0s} 172.20.128.86 22 <nil> <nil>}
	I0604 22:14:25.006428    4212 main.go:141] libmachine: About to run SSH command:
	hostname
	I0604 22:14:25.151596    4212 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0604 22:14:25.151705    4212 buildroot.go:166] provisioning hostname "ha-609500-m02"
	I0604 22:14:25.151759    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500-m02 ).state
	I0604 22:14:27.541150    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:14:27.541342    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:14:27.541342    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500-m02 ).networkadapters[0]).ipaddresses[0]
	I0604 22:14:30.432498    4212 main.go:141] libmachine: [stdout =====>] : 172.20.128.86
	
	I0604 22:14:30.432498    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:14:30.438033    4212 main.go:141] libmachine: Using SSH client type: native
	I0604 22:14:30.438420    4212 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf6a540] 0xf6d120 <nil>  [] 0s} 172.20.128.86 22 <nil> <nil>}
	I0604 22:14:30.438593    4212 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-609500-m02 && echo "ha-609500-m02" | sudo tee /etc/hostname
	I0604 22:14:30.617297    4212 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-609500-m02
	
	I0604 22:14:30.617343    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500-m02 ).state
	I0604 22:14:33.034710    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:14:33.034792    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:14:33.034874    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500-m02 ).networkadapters[0]).ipaddresses[0]
	I0604 22:14:35.932833    4212 main.go:141] libmachine: [stdout =====>] : 172.20.128.86
	
	I0604 22:14:35.932833    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:14:35.939110    4212 main.go:141] libmachine: Using SSH client type: native
	I0604 22:14:35.939884    4212 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf6a540] 0xf6d120 <nil>  [] 0s} 172.20.128.86 22 <nil> <nil>}
	I0604 22:14:35.939884    4212 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-609500-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-609500-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-609500-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0604 22:14:36.100103    4212 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0604 22:14:36.100297    4212 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0604 22:14:36.100297    4212 buildroot.go:174] setting up certificates
	I0604 22:14:36.100297    4212 provision.go:84] configureAuth start
	I0604 22:14:36.100395    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500-m02 ).state
	I0604 22:14:38.503070    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:14:38.503070    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:14:38.503070    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500-m02 ).networkadapters[0]).ipaddresses[0]
	I0604 22:14:41.360099    4212 main.go:141] libmachine: [stdout =====>] : 172.20.128.86
	
	I0604 22:14:41.360490    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:14:41.360557    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500-m02 ).state
	I0604 22:14:43.841165    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:14:43.841631    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:14:43.841631    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500-m02 ).networkadapters[0]).ipaddresses[0]
	I0604 22:14:46.760256    4212 main.go:141] libmachine: [stdout =====>] : 172.20.128.86
	
	I0604 22:14:46.760256    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:14:46.760256    4212 provision.go:143] copyHostCerts
	I0604 22:14:46.760256    4212 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0604 22:14:46.760256    4212 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0604 22:14:46.760256    4212 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0604 22:14:46.762224    4212 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0604 22:14:46.763339    4212 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0604 22:14:46.763723    4212 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0604 22:14:46.763723    4212 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0604 22:14:46.764039    4212 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0604 22:14:46.764337    4212 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0604 22:14:46.765206    4212 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0604 22:14:46.765206    4212 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0604 22:14:46.765574    4212 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0604 22:14:46.766864    4212 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-609500-m02 san=[127.0.0.1 172.20.128.86 ha-609500-m02 localhost minikube]
	I0604 22:14:46.987872    4212 provision.go:177] copyRemoteCerts
	I0604 22:14:47.000211    4212 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0604 22:14:47.000211    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500-m02 ).state
	I0604 22:14:49.404969    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:14:49.404969    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:14:49.405229    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500-m02 ).networkadapters[0]).ipaddresses[0]
	I0604 22:14:52.350548    4212 main.go:141] libmachine: [stdout =====>] : 172.20.128.86
	
	I0604 22:14:52.350548    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:14:52.351607    4212 sshutil.go:53] new ssh client: &{IP:172.20.128.86 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-609500-m02\id_rsa Username:docker}
	I0604 22:14:52.466654    4212 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.466398s)
	I0604 22:14:52.466772    4212 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0604 22:14:52.467329    4212 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0604 22:14:52.523441    4212 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0604 22:14:52.524097    4212 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0604 22:14:52.581684    4212 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0604 22:14:52.582153    4212 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0604 22:14:52.640869    4212 provision.go:87] duration metric: took 16.5404365s to configureAuth
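
The configureAuth phase above refreshes the host-side copies of ca.pem, cert.pem and key.pem and then generates a server certificate for the new node whose SANs cover 127.0.0.1, the node's Hyper-V address 172.20.128.86, the hostname ha-609500-m02, localhost and minikube, so the Docker TLS endpoint validates however it is addressed. A self-contained Go sketch of issuing a SAN certificate like that with crypto/x509 follows; the CA here is generated in memory purely for illustration, whereas the real flow signs with the existing ca.pem / ca-key.pem from the .minikube certs directory, and error handling is elided for brevity:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Stand-in CA (the real one would be loaded from ca.pem / ca-key.pem).
	caKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the SANs seen in the log:
	// 127.0.0.1, 172.20.128.86, ha-609500-m02, localhost, minikube.
	srvKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-609500-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-609500-m02", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.20.128.86")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}

The resulting server.pem and server-key.pem are what the copyRemoteCerts / scp lines just above pushed into /etc/docker on the guest.
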
	I0604 22:14:52.640869    4212 buildroot.go:189] setting minikube options for container-runtime
	I0604 22:14:52.642063    4212 config.go:182] Loaded profile config "ha-609500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0604 22:14:52.642145    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500-m02 ).state
	I0604 22:14:55.062634    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:14:55.062634    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:14:55.063023    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500-m02 ).networkadapters[0]).ipaddresses[0]
	I0604 22:14:57.956978    4212 main.go:141] libmachine: [stdout =====>] : 172.20.128.86
	
	I0604 22:14:57.957743    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:14:57.964441    4212 main.go:141] libmachine: Using SSH client type: native
	I0604 22:14:57.964575    4212 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf6a540] 0xf6d120 <nil>  [] 0s} 172.20.128.86 22 <nil> <nil>}
	I0604 22:14:57.964575    4212 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0604 22:14:58.106314    4212 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0604 22:14:58.106314    4212 buildroot.go:70] root file system type: tmpfs
	I0604 22:14:58.106568    4212 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0604 22:14:58.106669    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500-m02 ).state
	I0604 22:15:00.507901    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:15:00.507934    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:15:00.508140    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500-m02 ).networkadapters[0]).ipaddresses[0]
	I0604 22:15:03.387107    4212 main.go:141] libmachine: [stdout =====>] : 172.20.128.86
	
	I0604 22:15:03.387107    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:15:03.393194    4212 main.go:141] libmachine: Using SSH client type: native
	I0604 22:15:03.394191    4212 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf6a540] 0xf6d120 <nil>  [] 0s} 172.20.128.86 22 <nil> <nil>}
	I0604 22:15:03.394191    4212 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.20.131.101"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0604 22:15:03.567230    4212 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.20.131.101
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0604 22:15:03.567230    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500-m02 ).state
	I0604 22:15:05.969823    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:15:05.970665    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:15:05.970665    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500-m02 ).networkadapters[0]).ipaddresses[0]
	I0604 22:15:08.860711    4212 main.go:141] libmachine: [stdout =====>] : 172.20.128.86
	
	I0604 22:15:08.861671    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:15:08.868155    4212 main.go:141] libmachine: Using SSH client type: native
	I0604 22:15:08.868155    4212 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf6a540] 0xf6d120 <nil>  [] 0s} 172.20.128.86 22 <nil> <nil>}
	I0604 22:15:08.868747    4212 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0604 22:15:11.154072    4212 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0604 22:15:11.154252    4212 machine.go:97] duration metric: took 51.526929s to provisionDockerMachine
	I0604 22:15:11.154252    4212 client.go:171] duration metric: took 2m15.7590028s to LocalClient.Create
	I0604 22:15:11.154347    4212 start.go:167] duration metric: took 2m15.7590028s to libmachine.API.Create "ha-609500"
	I0604 22:15:11.154347    4212 start.go:293] postStartSetup for "ha-609500-m02" (driver="hyperv")
	I0604 22:15:11.154402    4212 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0604 22:15:11.171040    4212 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0604 22:15:11.171040    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500-m02 ).state
	I0604 22:15:13.587605    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:15:13.588185    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:15:13.588185    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500-m02 ).networkadapters[0]).ipaddresses[0]
	I0604 22:15:16.475759    4212 main.go:141] libmachine: [stdout =====>] : 172.20.128.86
	
	I0604 22:15:16.475821    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:15:16.475821    4212 sshutil.go:53] new ssh client: &{IP:172.20.128.86 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-609500-m02\id_rsa Username:docker}
	I0604 22:15:16.589130    4212 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.418045s)
	I0604 22:15:16.603270    4212 ssh_runner.go:195] Run: cat /etc/os-release
	I0604 22:15:16.613507    4212 info.go:137] Remote host: Buildroot 2023.02.9
	I0604 22:15:16.613507    4212 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0604 22:15:16.613507    4212 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0604 22:15:16.614863    4212 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\140642.pem -> 140642.pem in /etc/ssl/certs
	I0604 22:15:16.614863    4212 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\140642.pem -> /etc/ssl/certs/140642.pem
	I0604 22:15:16.628498    4212 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0604 22:15:16.655005    4212 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\140642.pem --> /etc/ssl/certs/140642.pem (1708 bytes)
	I0604 22:15:16.713603    4212 start.go:296] duration metric: took 5.5592106s for postStartSetup
	I0604 22:15:16.716731    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500-m02 ).state
	I0604 22:15:19.119533    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:15:19.119710    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:15:19.119710    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500-m02 ).networkadapters[0]).ipaddresses[0]
	I0604 22:15:22.070373    4212 main.go:141] libmachine: [stdout =====>] : 172.20.128.86
	
	I0604 22:15:22.070373    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:15:22.071456    4212 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\config.json ...
	I0604 22:15:22.077994    4212 start.go:128] duration metric: took 2m26.6860506s to createHost
	I0604 22:15:22.077994    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500-m02 ).state
	I0604 22:15:24.507290    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:15:24.507290    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:15:24.507491    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500-m02 ).networkadapters[0]).ipaddresses[0]
	I0604 22:15:27.403271    4212 main.go:141] libmachine: [stdout =====>] : 172.20.128.86
	
	I0604 22:15:27.404042    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:15:27.409585    4212 main.go:141] libmachine: Using SSH client type: native
	I0604 22:15:27.410233    4212 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf6a540] 0xf6d120 <nil>  [] 0s} 172.20.128.86 22 <nil> <nil>}
	I0604 22:15:27.410233    4212 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0604 22:15:27.556979    4212 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717539327.558099855
	
	I0604 22:15:27.557073    4212 fix.go:216] guest clock: 1717539327.558099855
	I0604 22:15:27.557073    4212 fix.go:229] Guest: 2024-06-04 22:15:27.558099855 +0000 UTC Remote: 2024-06-04 22:15:22.0779942 +0000 UTC m=+368.620306501 (delta=5.480105655s)
	I0604 22:15:27.557073    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500-m02 ).state
	I0604 22:15:30.002193    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:15:30.002193    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:15:30.002193    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500-m02 ).networkadapters[0]).ipaddresses[0]
	I0604 22:15:32.897972    4212 main.go:141] libmachine: [stdout =====>] : 172.20.128.86
	
	I0604 22:15:32.897972    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:15:32.903144    4212 main.go:141] libmachine: Using SSH client type: native
	I0604 22:15:32.903144    4212 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf6a540] 0xf6d120 <nil>  [] 0s} 172.20.128.86 22 <nil> <nil>}
	I0604 22:15:32.903144    4212 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1717539327
	I0604 22:15:33.070392    4212 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Jun  4 22:15:27 UTC 2024
	
	I0604 22:15:33.070392    4212 fix.go:236] clock set: Tue Jun  4 22:15:27 UTC 2024
	 (err=<nil>)
	I0604 22:15:33.070392    4212 start.go:83] releasing machines lock for "ha-609500-m02", held for 2m37.6783589s
	I0604 22:15:33.070392    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500-m02 ).state
	I0604 22:15:35.486308    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:15:35.486541    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:15:35.486541    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500-m02 ).networkadapters[0]).ipaddresses[0]
	I0604 22:15:38.344967    4212 main.go:141] libmachine: [stdout =====>] : 172.20.128.86
	
	I0604 22:15:38.345137    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:15:38.347715    4212 out.go:177] * Found network options:
	I0604 22:15:38.351892    4212 out.go:177]   - NO_PROXY=172.20.131.101
	W0604 22:15:38.355013    4212 proxy.go:119] fail to check proxy env: Error ip not in block
	I0604 22:15:38.357598    4212 out.go:177]   - NO_PROXY=172.20.131.101
	W0604 22:15:38.363337    4212 proxy.go:119] fail to check proxy env: Error ip not in block
	W0604 22:15:38.365105    4212 proxy.go:119] fail to check proxy env: Error ip not in block
	I0604 22:15:38.369071    4212 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0604 22:15:38.369229    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500-m02 ).state
	I0604 22:15:38.378811    4212 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0604 22:15:38.378811    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500-m02 ).state
	I0604 22:15:40.869798    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:15:40.869798    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:15:40.870630    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500-m02 ).networkadapters[0]).ipaddresses[0]
	I0604 22:15:40.871077    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:15:40.871626    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:15:40.871704    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500-m02 ).networkadapters[0]).ipaddresses[0]
	I0604 22:15:43.799991    4212 main.go:141] libmachine: [stdout =====>] : 172.20.128.86
	
	I0604 22:15:43.800668    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:15:43.800736    4212 sshutil.go:53] new ssh client: &{IP:172.20.128.86 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-609500-m02\id_rsa Username:docker}
	I0604 22:15:43.826729    4212 main.go:141] libmachine: [stdout =====>] : 172.20.128.86
	
	I0604 22:15:43.826729    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:15:43.826729    4212 sshutil.go:53] new ssh client: &{IP:172.20.128.86 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-609500-m02\id_rsa Username:docker}
	I0604 22:15:44.005530    4212 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.6364131s)
	I0604 22:15:44.005530    4212 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.6266735s)
	W0604 22:15:44.005530    4212 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0604 22:15:44.019532    4212 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0604 22:15:44.051153    4212 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0604 22:15:44.051297    4212 start.go:494] detecting cgroup driver to use...
	I0604 22:15:44.051559    4212 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0604 22:15:44.107587    4212 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0604 22:15:44.147359    4212 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0604 22:15:44.171846    4212 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0604 22:15:44.186157    4212 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0604 22:15:44.228452    4212 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0604 22:15:44.266702    4212 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0604 22:15:44.305645    4212 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0604 22:15:44.344196    4212 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0604 22:15:44.382847    4212 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0604 22:15:44.418519    4212 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0604 22:15:44.458098    4212 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0604 22:15:44.500906    4212 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0604 22:15:44.541186    4212 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0604 22:15:44.583574    4212 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0604 22:15:44.822323    4212 ssh_runner.go:195] Run: sudo systemctl restart containerd
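
The run of commands above reconfigures containerd entirely through sed edits to /etc/containerd/config.toml (pause image pinned, SystemdCgroup forced to false for the "cgroupfs" driver, CNI conf_dir pointed at /etc/cni/net.d), then reloads systemd and restarts the service. A condensed Go sketch of that sequence, reusing the same hypothetical runSSH helper as the sketch above (not minikube's actual implementation):

package main

import "fmt"

// configureContainerd replays the key edits from the log: pin the pause image,
// force SystemdCgroup = false (the cgroupfs driver), point the CNI conf dir at
// /etc/cni/net.d, then reload systemd and restart containerd.
func configureContainerd(runSSH func(cmd string) (string, error)) error {
	cmds := []string{
		`sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml`,
		`sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml`,
		`sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml`,
		"sudo systemctl daemon-reload",
		"sudo systemctl restart containerd",
	}
	for _, c := range cmds {
		if _, err := runSSH(c); err != nil {
			return fmt.Errorf("%s: %w", c, err)
		}
	}
	return nil
}

func main() {
	// Dry run: print each command instead of executing it over SSH.
	_ = configureContainerd(func(cmd string) (string, error) {
		fmt.Println("would run:", cmd)
		return "", nil
	})
}
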
	I0604 22:15:44.858867    4212 start.go:494] detecting cgroup driver to use...
	I0604 22:15:44.873173    4212 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0604 22:15:44.917334    4212 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0604 22:15:44.959777    4212 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0604 22:15:45.010917    4212 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0604 22:15:45.059638    4212 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0604 22:15:45.106765    4212 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0604 22:15:45.179152    4212 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0604 22:15:45.211428    4212 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0604 22:15:45.276394    4212 ssh_runner.go:195] Run: which cri-dockerd
	I0604 22:15:45.296682    4212 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0604 22:15:45.318004    4212 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0604 22:15:45.369344    4212 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0604 22:15:45.600327    4212 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0604 22:15:45.835045    4212 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0604 22:15:45.835110    4212 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0604 22:15:45.890466    4212 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0604 22:15:46.138804    4212 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0604 22:15:48.745112    4212 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.6062866s)
	I0604 22:15:48.760094    4212 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0604 22:15:48.807460    4212 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0604 22:15:48.851484    4212 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0604 22:15:49.096781    4212 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0604 22:15:49.353120    4212 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0604 22:15:49.608884    4212 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0604 22:15:49.657861    4212 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0604 22:15:49.702760    4212 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0604 22:15:49.935380    4212 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0604 22:15:50.061262    4212 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0604 22:15:50.074020    4212 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0604 22:15:50.085731    4212 start.go:562] Will wait 60s for crictl version
	I0604 22:15:50.100702    4212 ssh_runner.go:195] Run: which crictl
	I0604 22:15:50.124785    4212 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0604 22:15:50.197531    4212 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.1.3
	RuntimeApiVersion:  v1
	I0604 22:15:50.207987    4212 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0604 22:15:50.260653    4212 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0604 22:15:50.303576    4212 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.1.3 ...
	I0604 22:15:50.307808    4212 out.go:177]   - env NO_PROXY=172.20.131.101
	I0604 22:15:50.309870    4212 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0604 22:15:50.315811    4212 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0604 22:15:50.315811    4212 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0604 22:15:50.315811    4212 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0604 22:15:50.315811    4212 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:24:f8:85 Flags:up|broadcast|multicast|running}
	I0604 22:15:50.318800    4212 ip.go:210] interface addr: fe80::4093:d10:ab69:6c7d/64
	I0604 22:15:50.318800    4212 ip.go:210] interface addr: 172.20.128.1/20
	I0604 22:15:50.332464    4212 ssh_runner.go:195] Run: grep 172.20.128.1	host.minikube.internal$ /etc/hosts
	I0604 22:15:50.341446    4212 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.20.128.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0604 22:15:50.369374    4212 mustload.go:65] Loading cluster: ha-609500
	I0604 22:15:50.369374    4212 config.go:182] Loaded profile config "ha-609500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0604 22:15:50.370922    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500 ).state
	I0604 22:15:52.798879    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:15:52.799713    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:15:52.799713    4212 host.go:66] Checking if "ha-609500" exists ...
	I0604 22:15:52.800027    4212 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500 for IP: 172.20.128.86
	I0604 22:15:52.800027    4212 certs.go:194] generating shared ca certs ...
	I0604 22:15:52.800027    4212 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0604 22:15:52.800749    4212 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0604 22:15:52.801585    4212 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0604 22:15:52.801585    4212 certs.go:256] generating profile certs ...
	I0604 22:15:52.802310    4212 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\client.key
	I0604 22:15:52.802310    4212 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\apiserver.key.d1566043
	I0604 22:15:52.802310    4212 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\apiserver.crt.d1566043 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.20.131.101 172.20.128.86 172.20.143.254]
	I0604 22:15:53.300810    4212 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\apiserver.crt.d1566043 ...
	I0604 22:15:53.301810    4212 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\apiserver.crt.d1566043: {Name:mkebd533e18fdc3cf055acbe62a648019b0cef31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0604 22:15:53.302124    4212 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\apiserver.key.d1566043 ...
	I0604 22:15:53.302124    4212 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\apiserver.key.d1566043: {Name:mk77abb44ef0f71fd51608e6bb570d80041136e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0604 22:15:53.303134    4212 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\apiserver.crt.d1566043 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\apiserver.crt
	I0604 22:15:53.317401    4212 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\apiserver.key.d1566043 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\apiserver.key
	I0604 22:15:53.317885    4212 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\proxy-client.key
	I0604 22:15:53.317885    4212 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0604 22:15:53.318991    4212 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0604 22:15:53.319134    4212 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0604 22:15:53.319344    4212 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0604 22:15:53.319549    4212 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0604 22:15:53.319760    4212 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0604 22:15:53.319760    4212 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0604 22:15:53.319994    4212 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0604 22:15:53.320237    4212 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14064.pem (1338 bytes)
	W0604 22:15:53.320883    4212 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14064_empty.pem, impossibly tiny 0 bytes
	I0604 22:15:53.320883    4212 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0604 22:15:53.321169    4212 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0604 22:15:53.321169    4212 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0604 22:15:53.321737    4212 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0604 22:15:53.322027    4212 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\140642.pem (1708 bytes)
	I0604 22:15:53.322027    4212 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14064.pem -> /usr/share/ca-certificates/14064.pem
	I0604 22:15:53.322705    4212 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\140642.pem -> /usr/share/ca-certificates/140642.pem
	I0604 22:15:53.322705    4212 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0604 22:15:53.323039    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500 ).state
	I0604 22:15:55.768953    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:15:55.768953    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:15:55.768953    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500 ).networkadapters[0]).ipaddresses[0]
	I0604 22:15:58.693294    4212 main.go:141] libmachine: [stdout =====>] : 172.20.131.101
	
	I0604 22:15:58.693294    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:15:58.693664    4212 sshutil.go:53] new ssh client: &{IP:172.20.131.101 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-609500\id_rsa Username:docker}
	I0604 22:15:58.789834    4212 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0604 22:15:58.800053    4212 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0604 22:15:58.836664    4212 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0604 22:15:58.847012    4212 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0604 22:15:58.887173    4212 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0604 22:15:58.896501    4212 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0604 22:15:58.938621    4212 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0604 22:15:58.947640    4212 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0604 22:15:58.990235    4212 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0604 22:15:58.999379    4212 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0604 22:15:59.040281    4212 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0604 22:15:59.046983    4212 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0604 22:15:59.072650    4212 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0604 22:15:59.130147    4212 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0604 22:15:59.191378    4212 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0604 22:15:59.251128    4212 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0604 22:15:59.311138    4212 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0604 22:15:59.369871    4212 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0604 22:15:59.429361    4212 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0604 22:15:59.483712    4212 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0604 22:15:59.543770    4212 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14064.pem --> /usr/share/ca-certificates/14064.pem (1338 bytes)
	I0604 22:15:59.599007    4212 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\140642.pem --> /usr/share/ca-certificates/140642.pem (1708 bytes)
	I0604 22:15:59.658768    4212 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0604 22:15:59.719070    4212 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0604 22:15:59.765039    4212 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0604 22:15:59.803298    4212 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0604 22:15:59.842209    4212 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0604 22:15:59.879496    4212 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0604 22:15:59.918288    4212 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0604 22:15:59.958130    4212 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0604 22:16:00.011512    4212 ssh_runner.go:195] Run: openssl version
	I0604 22:16:00.037584    4212 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14064.pem && ln -fs /usr/share/ca-certificates/14064.pem /etc/ssl/certs/14064.pem"
	I0604 22:16:00.077750    4212 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14064.pem
	I0604 22:16:00.086549    4212 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun  4 21:50 /usr/share/ca-certificates/14064.pem
	I0604 22:16:00.100135    4212 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14064.pem
	I0604 22:16:00.125121    4212 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14064.pem /etc/ssl/certs/51391683.0"
	I0604 22:16:00.168074    4212 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/140642.pem && ln -fs /usr/share/ca-certificates/140642.pem /etc/ssl/certs/140642.pem"
	I0604 22:16:00.205992    4212 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/140642.pem
	I0604 22:16:00.216370    4212 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun  4 21:50 /usr/share/ca-certificates/140642.pem
	I0604 22:16:00.232332    4212 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/140642.pem
	I0604 22:16:00.253143    4212 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/140642.pem /etc/ssl/certs/3ec20f2e.0"
	I0604 22:16:00.295614    4212 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0604 22:16:00.335800    4212 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0604 22:16:00.344389    4212 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun  4 21:33 /usr/share/ca-certificates/minikubeCA.pem
	I0604 22:16:00.360026    4212 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0604 22:16:00.387374    4212 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0604 22:16:00.427632    4212 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0604 22:16:00.436087    4212 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0604 22:16:00.436372    4212 kubeadm.go:928] updating node {m02 172.20.128.86 8443 v1.30.1 docker true true} ...
	I0604 22:16:00.436372    4212 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-609500-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.20.128.86
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-609500 Namespace:default APIServerHAVIP:172.20.143.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0604 22:16:00.436372    4212 kube-vip.go:115] generating kube-vip config ...
	I0604 22:16:00.450759    4212 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0604 22:16:00.491043    4212 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0604 22:16:00.491043    4212 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.20.143.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0604 22:16:00.506544    4212 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0604 22:16:00.530861    4212 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.1': No such file or directory
	
	Initiating transfer...
	I0604 22:16:00.546165    4212 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.1
	I0604 22:16:00.577665    4212 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubectl
	I0604 22:16:00.577946    4212 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubeadm
	I0604 22:16:00.577946    4212 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubelet
	I0604 22:16:01.728137    4212 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubectl -> /var/lib/minikube/binaries/v1.30.1/kubectl
	I0604 22:16:01.741385    4212 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubectl
	I0604 22:16:01.753930    4212 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubectl': No such file or directory
	I0604 22:16:01.753930    4212 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubectl --> /var/lib/minikube/binaries/v1.30.1/kubectl (51454104 bytes)
	I0604 22:16:01.957772    4212 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubeadm -> /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0604 22:16:01.970768    4212 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0604 22:16:01.993752    4212 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubeadm': No such file or directory
	I0604 22:16:01.993752    4212 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubeadm --> /var/lib/minikube/binaries/v1.30.1/kubeadm (50249880 bytes)
	I0604 22:16:02.988660    4212 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0604 22:16:03.019626    4212 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubelet -> /var/lib/minikube/binaries/v1.30.1/kubelet
	I0604 22:16:03.035923    4212 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubelet
	I0604 22:16:03.044039    4212 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubelet': No such file or directory
	I0604 22:16:03.044262    4212 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubelet --> /var/lib/minikube/binaries/v1.30.1/kubelet (100100024 bytes)
	I0604 22:16:03.650703    4212 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0604 22:16:03.707378    4212 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0604 22:16:03.771992    4212 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0604 22:16:03.814098    4212 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0604 22:16:03.866479    4212 ssh_runner.go:195] Run: grep 172.20.143.254	control-plane.minikube.internal$ /etc/hosts
	I0604 22:16:03.875718    4212 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.20.143.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0604 22:16:03.919473    4212 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0604 22:16:04.179873    4212 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0604 22:16:04.213285    4212 host.go:66] Checking if "ha-609500" exists ...
	I0604 22:16:04.214196    4212 start.go:316] joinCluster: &{Name:ha-609500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19024/minikube-v1.33.1-1717518792-19024-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717518322-19024@sha256:d2210ba725128d67c6173c8b8d82d6c8736e8dad7a6c389a278f795205c6764f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 Clust
erName:ha-609500 Namespace:default APIServerHAVIP:172.20.143.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.131.101 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.20.128.86 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertEx
piration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0604 22:16:04.214196    4212 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0604 22:16:04.214196    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500 ).state
	I0604 22:16:06.666880    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:16:06.666951    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:16:06.666951    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500 ).networkadapters[0]).ipaddresses[0]
	I0604 22:16:09.558709    4212 main.go:141] libmachine: [stdout =====>] : 172.20.131.101
	
	I0604 22:16:09.558777    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:16:09.558777    4212 sshutil.go:53] new ssh client: &{IP:172.20.131.101 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-609500\id_rsa Username:docker}
	I0604 22:16:09.781417    4212 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm token create --print-join-command --ttl=0": (5.5671752s)
	I0604 22:16:09.781615    4212 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:172.20.128.86 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0604 22:16:09.781661    4212 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7bil88.zjhz80y1hcigx5ai --discovery-token-ca-cert-hash sha256:0ff0b921e821f4428dbcf012dd411f291cf65527f1de8321919343295d694e15 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-609500-m02 --control-plane --apiserver-advertise-address=172.20.128.86 --apiserver-bind-port=8443"
	I0604 22:16:57.246013    4212 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7bil88.zjhz80y1hcigx5ai --discovery-token-ca-cert-hash sha256:0ff0b921e821f4428dbcf012dd411f291cf65527f1de8321919343295d694e15 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-609500-m02 --control-plane --apiserver-advertise-address=172.20.128.86 --apiserver-bind-port=8443": (47.4639242s)
	I0604 22:16:57.246013    4212 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0604 22:16:58.222878    4212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-609500-m02 minikube.k8s.io/updated_at=2024_06_04T22_16_58_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=901ac483c3e1097c63cda7493d918b612a8127f5 minikube.k8s.io/name=ha-609500 minikube.k8s.io/primary=false
	I0604 22:16:58.414292    4212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-609500-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0604 22:16:58.640783    4212 start.go:318] duration metric: took 54.4260215s to joinCluster
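
The join that just completed is a two-step flow: the existing control plane is asked for a fresh join command (`kubeadm token create --print-join-command --ttl=0`), and that command is then executed on the new node with the extra control-plane flags visible in the log. A minimal Go sketch under those assumptions, with hypothetical runOnPrimary/runOnNew SSH helpers (not minikube's joinCluster implementation):

package main

import (
	"fmt"
	"strings"
)

// joinControlPlane asks the primary for a join command, runs it on the new
// node with control-plane flags, and makes sure kubelet is enabled there.
func joinControlPlane(runOnPrimary, runOnNew func(cmd string) (string, error), nodeName, nodeIP string) error {
	join, err := runOnPrimary(`sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm token create --print-join-command --ttl=0`)
	if err != nil {
		return err
	}
	cmd := fmt.Sprintf("sudo %s --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=%s --control-plane --apiserver-advertise-address=%s --apiserver-bind-port=8443",
		strings.TrimSpace(join), nodeName, nodeIP)
	if _, err := runOnNew(cmd); err != nil {
		return err
	}
	_, err = runOnNew("sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet")
	return err
}

func main() {
	// Dry run: print commands and hand back a placeholder join command.
	echo := func(cmd string) (string, error) {
		fmt.Println("would run:", cmd)
		return "kubeadm join control-plane.minikube.internal:8443 --token <redacted> --discovery-token-ca-cert-hash sha256:<redacted>", nil
	}
	_ = joinControlPlane(echo, echo, "ha-609500-m02", "172.20.128.86")
}
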
	I0604 22:16:58.643252    4212 start.go:234] Will wait 6m0s for node &{Name:m02 IP:172.20.128.86 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0604 22:16:58.643753    4212 config.go:182] Loaded profile config "ha-609500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0604 22:16:58.647467    4212 out.go:177] * Verifying Kubernetes components...
	I0604 22:16:58.667201    4212 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0604 22:16:59.150379    4212 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0604 22:16:59.186083    4212 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0604 22:16:59.186921    4212 kapi.go:59] client config for ha-609500: &rest.Config{Host:"https://172.20.143.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-609500\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-609500\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Ne
xtProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x240e1a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0604 22:16:59.186921    4212 kubeadm.go:477] Overriding stale ClientConfig host https://172.20.143.254:8443 with https://172.20.131.101:8443
	I0604 22:16:59.186921    4212 node_ready.go:35] waiting up to 6m0s for node "ha-609500-m02" to be "Ready" ...
	I0604 22:16:59.186921    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500-m02
	I0604 22:16:59.186921    4212 round_trippers.go:469] Request Headers:
	I0604 22:16:59.186921    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:16:59.186921    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:16:59.210049    4212 round_trippers.go:574] Response Status: 200 OK in 23 milliseconds
	I0604 22:16:59.687912    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500-m02
	I0604 22:16:59.687987    4212 round_trippers.go:469] Request Headers:
	I0604 22:16:59.687987    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:16:59.687987    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:16:59.697655    4212 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0604 22:17:00.194170    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500-m02
	I0604 22:17:00.194423    4212 round_trippers.go:469] Request Headers:
	I0604 22:17:00.194423    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:17:00.194423    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:17:00.227784    4212 round_trippers.go:574] Response Status: 200 OK in 33 milliseconds
	I0604 22:17:00.701704    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500-m02
	I0604 22:17:00.701704    4212 round_trippers.go:469] Request Headers:
	I0604 22:17:00.701704    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:17:00.701704    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:17:00.711351    4212 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0604 22:17:01.192919    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500-m02
	I0604 22:17:01.192919    4212 round_trippers.go:469] Request Headers:
	I0604 22:17:01.192919    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:17:01.192919    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:17:01.197895    4212 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0604 22:17:01.199208    4212 node_ready.go:53] node "ha-609500-m02" has status "Ready":"False"
	I0604 22:17:01.700979    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500-m02
	I0604 22:17:01.701038    4212 round_trippers.go:469] Request Headers:
	I0604 22:17:01.701038    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:17:01.701038    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:17:01.706713    4212 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0604 22:17:02.195319    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500-m02
	I0604 22:17:02.195319    4212 round_trippers.go:469] Request Headers:
	I0604 22:17:02.195431    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:17:02.195431    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:17:02.200649    4212 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0604 22:17:02.689135    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500-m02
	I0604 22:17:02.689135    4212 round_trippers.go:469] Request Headers:
	I0604 22:17:02.689135    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:17:02.689135    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:17:02.696181    4212 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0604 22:17:03.196854    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500-m02
	I0604 22:17:03.196954    4212 round_trippers.go:469] Request Headers:
	I0604 22:17:03.196954    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:17:03.197051    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:17:03.202815    4212 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0604 22:17:03.203815    4212 node_ready.go:53] node "ha-609500-m02" has status "Ready":"False"
	I0604 22:17:03.695109    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500-m02
	I0604 22:17:03.695109    4212 round_trippers.go:469] Request Headers:
	I0604 22:17:03.695109    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:17:03.695109    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:17:03.700864    4212 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0604 22:17:04.199276    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500-m02
	I0604 22:17:04.199276    4212 round_trippers.go:469] Request Headers:
	I0604 22:17:04.199276    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:17:04.199471    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:17:04.207973    4212 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0604 22:17:04.701178    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500-m02
	I0604 22:17:04.701178    4212 round_trippers.go:469] Request Headers:
	I0604 22:17:04.701178    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:17:04.701178    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:17:04.705829    4212 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0604 22:17:05.190383    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500-m02
	I0604 22:17:05.190383    4212 round_trippers.go:469] Request Headers:
	I0604 22:17:05.190383    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:17:05.190383    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:17:05.201887    4212 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0604 22:17:05.692236    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500-m02
	I0604 22:17:05.692447    4212 round_trippers.go:469] Request Headers:
	I0604 22:17:05.692447    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:17:05.692447    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:17:05.698708    4212 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0604 22:17:05.699718    4212 node_ready.go:53] node "ha-609500-m02" has status "Ready":"False"
	I0604 22:17:06.192378    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500-m02
	I0604 22:17:06.192468    4212 round_trippers.go:469] Request Headers:
	I0604 22:17:06.192468    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:17:06.192535    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:17:06.204989    4212 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0604 22:17:06.694235    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500-m02
	I0604 22:17:06.694235    4212 round_trippers.go:469] Request Headers:
	I0604 22:17:06.694235    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:17:06.694235    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:17:06.699692    4212 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0604 22:17:07.192336    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500-m02
	I0604 22:17:07.192400    4212 round_trippers.go:469] Request Headers:
	I0604 22:17:07.192400    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:17:07.192400    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:17:07.197761    4212 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0604 22:17:07.692693    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500-m02
	I0604 22:17:07.692693    4212 round_trippers.go:469] Request Headers:
	I0604 22:17:07.692693    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:17:07.692782    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:17:07.700092    4212 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0604 22:17:07.701738    4212 node_ready.go:53] node "ha-609500-m02" has status "Ready":"False"
	I0604 22:17:08.193188    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500-m02
	I0604 22:17:08.193507    4212 round_trippers.go:469] Request Headers:
	I0604 22:17:08.193507    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:17:08.193507    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:17:08.201880    4212 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0604 22:17:08.690715    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500-m02
	I0604 22:17:08.690715    4212 round_trippers.go:469] Request Headers:
	I0604 22:17:08.690715    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:17:08.690715    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:17:08.697376    4212 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0604 22:17:09.190763    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500-m02
	I0604 22:17:09.190862    4212 round_trippers.go:469] Request Headers:
	I0604 22:17:09.190862    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:17:09.190862    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:17:09.195300    4212 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0604 22:17:09.689887    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500-m02
	I0604 22:17:09.690196    4212 round_trippers.go:469] Request Headers:
	I0604 22:17:09.690268    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:17:09.690268    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:17:09.696352    4212 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0604 22:17:10.192285    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500-m02
	I0604 22:17:10.192375    4212 round_trippers.go:469] Request Headers:
	I0604 22:17:10.192375    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:17:10.192375    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:17:10.199624    4212 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0604 22:17:10.200672    4212 node_ready.go:49] node "ha-609500-m02" has status "Ready":"True"
	I0604 22:17:10.200672    4212 node_ready.go:38] duration metric: took 11.0136611s for node "ha-609500-m02" to be "Ready" ...
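
The repeated GETs above are a plain readiness poll against /api/v1/nodes/ha-609500-m02 at roughly 500ms intervals until the Ready condition flips to True (about 11s in this run). For reference, roughly the same poll written directly against client-go; illustrative only, with the kubeconfig path, node name, and timeout taken from this run:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeIsReady reports whether the node's Ready condition is True.
func nodeIsReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", `C:\Users\jenkins.minikube6\minikube-integration\kubeconfig`)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		node, err := client.CoreV1().Nodes().Get(context.TODO(), "ha-609500-m02", metav1.GetOptions{})
		if err == nil && nodeIsReady(node) {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond) // roughly the poll interval seen in the log
	}
	fmt.Println("timed out waiting for node to become Ready")
}
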
	I0604 22:17:10.200672    4212 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0604 22:17:10.200672    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods
	I0604 22:17:10.200672    4212 round_trippers.go:469] Request Headers:
	I0604 22:17:10.200672    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:17:10.200672    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:17:10.211622    4212 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0604 22:17:10.219615    4212 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-r68pn" in "kube-system" namespace to be "Ready" ...
	I0604 22:17:10.219615    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-r68pn
	I0604 22:17:10.219615    4212 round_trippers.go:469] Request Headers:
	I0604 22:17:10.219615    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:17:10.219615    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:17:10.226008    4212 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0604 22:17:10.226876    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500
	I0604 22:17:10.226966    4212 round_trippers.go:469] Request Headers:
	I0604 22:17:10.226966    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:17:10.226966    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:17:10.237753    4212 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0604 22:17:10.238801    4212 pod_ready.go:92] pod "coredns-7db6d8ff4d-r68pn" in "kube-system" namespace has status "Ready":"True"
	I0604 22:17:10.238801    4212 pod_ready.go:81] duration metric: took 19.1858ms for pod "coredns-7db6d8ff4d-r68pn" in "kube-system" namespace to be "Ready" ...
	I0604 22:17:10.238801    4212 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-zlxf9" in "kube-system" namespace to be "Ready" ...
	I0604 22:17:10.238801    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-zlxf9
	I0604 22:17:10.238801    4212 round_trippers.go:469] Request Headers:
	I0604 22:17:10.238801    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:17:10.238801    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:17:10.242759    4212 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0604 22:17:10.244277    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500
	I0604 22:17:10.244277    4212 round_trippers.go:469] Request Headers:
	I0604 22:17:10.244342    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:17:10.244342    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:17:10.249763    4212 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0604 22:17:10.250409    4212 pod_ready.go:92] pod "coredns-7db6d8ff4d-zlxf9" in "kube-system" namespace has status "Ready":"True"
	I0604 22:17:10.250470    4212 pod_ready.go:81] duration metric: took 11.669ms for pod "coredns-7db6d8ff4d-zlxf9" in "kube-system" namespace to be "Ready" ...
	I0604 22:17:10.250470    4212 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-609500" in "kube-system" namespace to be "Ready" ...
	I0604 22:17:10.250600    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods/etcd-ha-609500
	I0604 22:17:10.250600    4212 round_trippers.go:469] Request Headers:
	I0604 22:17:10.250652    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:17:10.250652    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:17:10.255754    4212 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0604 22:17:10.256762    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500
	I0604 22:17:10.256762    4212 round_trippers.go:469] Request Headers:
	I0604 22:17:10.256762    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:17:10.256762    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:17:10.261761    4212 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0604 22:17:10.261761    4212 pod_ready.go:92] pod "etcd-ha-609500" in "kube-system" namespace has status "Ready":"True"
	I0604 22:17:10.261761    4212 pod_ready.go:81] duration metric: took 11.2905ms for pod "etcd-ha-609500" in "kube-system" namespace to be "Ready" ...
	I0604 22:17:10.261761    4212 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-609500-m02" in "kube-system" namespace to be "Ready" ...
	I0604 22:17:10.261761    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods/etcd-ha-609500-m02
	I0604 22:17:10.262785    4212 round_trippers.go:469] Request Headers:
	I0604 22:17:10.262785    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:17:10.262785    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:17:10.268764    4212 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0604 22:17:10.269602    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500-m02
	I0604 22:17:10.269602    4212 round_trippers.go:469] Request Headers:
	I0604 22:17:10.269602    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:17:10.269602    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:17:10.274208    4212 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0604 22:17:10.771269    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods/etcd-ha-609500-m02
	I0604 22:17:10.771269    4212 round_trippers.go:469] Request Headers:
	I0604 22:17:10.771357    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:17:10.771357    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:17:10.780548    4212 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0604 22:17:10.782247    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500-m02
	I0604 22:17:10.782273    4212 round_trippers.go:469] Request Headers:
	I0604 22:17:10.782273    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:17:10.782273    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:17:10.787298    4212 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0604 22:17:11.272412    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods/etcd-ha-609500-m02
	I0604 22:17:11.272412    4212 round_trippers.go:469] Request Headers:
	I0604 22:17:11.272412    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:17:11.272412    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:17:11.279009    4212 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0604 22:17:11.280932    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500-m02
	I0604 22:17:11.280932    4212 round_trippers.go:469] Request Headers:
	I0604 22:17:11.280932    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:17:11.280932    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:17:11.286253    4212 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0604 22:17:11.773440    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods/etcd-ha-609500-m02
	I0604 22:17:11.773440    4212 round_trippers.go:469] Request Headers:
	I0604 22:17:11.773440    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:17:11.773740    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:17:11.781369    4212 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0604 22:17:11.782141    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500-m02
	I0604 22:17:11.782141    4212 round_trippers.go:469] Request Headers:
	I0604 22:17:11.782141    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:17:11.782141    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:17:11.787848    4212 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0604 22:17:12.262761    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods/etcd-ha-609500-m02
	I0604 22:17:12.262816    4212 round_trippers.go:469] Request Headers:
	I0604 22:17:12.262816    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:17:12.262816    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:17:12.267374    4212 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0604 22:17:12.268337    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500-m02
	I0604 22:17:12.268417    4212 round_trippers.go:469] Request Headers:
	I0604 22:17:12.268417    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:17:12.268417    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:17:12.273611    4212 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0604 22:17:12.274384    4212 pod_ready.go:92] pod "etcd-ha-609500-m02" in "kube-system" namespace has status "Ready":"True"
	I0604 22:17:12.274457    4212 pod_ready.go:81] duration metric: took 2.0126799s for pod "etcd-ha-609500-m02" in "kube-system" namespace to be "Ready" ...
	I0604 22:17:12.274517    4212 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-609500" in "kube-system" namespace to be "Ready" ...
	I0604 22:17:12.274569    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-609500
	I0604 22:17:12.274569    4212 round_trippers.go:469] Request Headers:
	I0604 22:17:12.274631    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:17:12.274631    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:17:12.279890    4212 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0604 22:17:12.281362    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500
	I0604 22:17:12.281362    4212 round_trippers.go:469] Request Headers:
	I0604 22:17:12.281362    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:17:12.281362    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:17:12.301490    4212 round_trippers.go:574] Response Status: 200 OK in 20 milliseconds
	I0604 22:17:12.302614    4212 pod_ready.go:92] pod "kube-apiserver-ha-609500" in "kube-system" namespace has status "Ready":"True"
	I0604 22:17:12.302614    4212 pod_ready.go:81] duration metric: took 28.0959ms for pod "kube-apiserver-ha-609500" in "kube-system" namespace to be "Ready" ...
	I0604 22:17:12.302679    4212 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-609500-m02" in "kube-system" namespace to be "Ready" ...
	I0604 22:17:12.405617    4212 request.go:629] Waited for 102.5527ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-609500-m02
	I0604 22:17:12.405741    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-609500-m02
	I0604 22:17:12.405741    4212 round_trippers.go:469] Request Headers:
	I0604 22:17:12.405883    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:17:12.405883    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:17:12.413274    4212 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0604 22:17:12.594093    4212 request.go:629] Waited for 179.6756ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.131.101:8443/api/v1/nodes/ha-609500-m02
	I0604 22:17:12.594332    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500-m02
	I0604 22:17:12.594398    4212 round_trippers.go:469] Request Headers:
	I0604 22:17:12.594398    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:17:12.594398    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:17:12.600050    4212 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0604 22:17:12.602171    4212 pod_ready.go:92] pod "kube-apiserver-ha-609500-m02" in "kube-system" namespace has status "Ready":"True"
	I0604 22:17:12.602171    4212 pod_ready.go:81] duration metric: took 299.4892ms for pod "kube-apiserver-ha-609500-m02" in "kube-system" namespace to be "Ready" ...
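The repeated "Waited for ... due to client-side throttling, not priority and fairness" entries above come from client-go's default client-side rate limiter rather than from the API server. A minimal sketch, assuming client-go and a kubeconfig path supplied by the caller (the package name, function name, and the QPS/Burst values are illustrative, not minikube's actual settings), of how a client raises those limits:

    package example

    import (
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // newFastClient builds a clientset with a larger client-side rate limit,
    // which shortens "Waited for ... due to client-side throttling" delays.
    func newFastClient(kubeconfig string) (*kubernetes.Clientset, error) {
    	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
    	if err != nil {
    		return nil, err
    	}
    	cfg.QPS = 50    // client-go default is 5 requests/second
    	cfg.Burst = 100 // client-go default burst is 10
    	return kubernetes.NewForConfig(cfg)
    }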
	I0604 22:17:12.602171    4212 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-609500" in "kube-system" namespace to be "Ready" ...
	I0604 22:17:12.795792    4212 request.go:629] Waited for 193.3457ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-609500
	I0604 22:17:12.796133    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-609500
	I0604 22:17:12.796133    4212 round_trippers.go:469] Request Headers:
	I0604 22:17:12.796133    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:17:12.796133    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:17:12.803958    4212 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0604 22:17:13.000294    4212 request.go:629] Waited for 195.4506ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.131.101:8443/api/v1/nodes/ha-609500
	I0604 22:17:13.000593    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500
	I0604 22:17:13.000593    4212 round_trippers.go:469] Request Headers:
	I0604 22:17:13.000593    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:17:13.000593    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:17:13.007281    4212 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0604 22:17:13.008476    4212 pod_ready.go:92] pod "kube-controller-manager-ha-609500" in "kube-system" namespace has status "Ready":"True"
	I0604 22:17:13.008476    4212 pod_ready.go:81] duration metric: took 406.3018ms for pod "kube-controller-manager-ha-609500" in "kube-system" namespace to be "Ready" ...
	I0604 22:17:13.008476    4212 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-609500-m02" in "kube-system" namespace to be "Ready" ...
	I0604 22:17:13.203245    4212 request.go:629] Waited for 194.7677ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-609500-m02
	I0604 22:17:13.203388    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-609500-m02
	I0604 22:17:13.203570    4212 round_trippers.go:469] Request Headers:
	I0604 22:17:13.203570    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:17:13.203652    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:17:13.209765    4212 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0604 22:17:13.406909    4212 request.go:629] Waited for 196.2426ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.131.101:8443/api/v1/nodes/ha-609500-m02
	I0604 22:17:13.406909    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500-m02
	I0604 22:17:13.406909    4212 round_trippers.go:469] Request Headers:
	I0604 22:17:13.406909    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:17:13.406909    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:17:13.412813    4212 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0604 22:17:13.413871    4212 pod_ready.go:92] pod "kube-controller-manager-ha-609500-m02" in "kube-system" namespace has status "Ready":"True"
	I0604 22:17:13.413871    4212 pod_ready.go:81] duration metric: took 405.3923ms for pod "kube-controller-manager-ha-609500-m02" in "kube-system" namespace to be "Ready" ...
	I0604 22:17:13.413871    4212 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4ppxq" in "kube-system" namespace to be "Ready" ...
	I0604 22:17:13.593703    4212 request.go:629] Waited for 179.5938ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4ppxq
	I0604 22:17:13.593773    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4ppxq
	I0604 22:17:13.593773    4212 round_trippers.go:469] Request Headers:
	I0604 22:17:13.593773    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:17:13.593773    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:17:13.601378    4212 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0604 22:17:13.797459    4212 request.go:629] Waited for 194.9058ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.131.101:8443/api/v1/nodes/ha-609500
	I0604 22:17:13.797676    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500
	I0604 22:17:13.797676    4212 round_trippers.go:469] Request Headers:
	I0604 22:17:13.797676    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:17:13.797676    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:17:13.803310    4212 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0604 22:17:13.803495    4212 pod_ready.go:92] pod "kube-proxy-4ppxq" in "kube-system" namespace has status "Ready":"True"
	I0604 22:17:13.804023    4212 pod_ready.go:81] duration metric: took 390.1485ms for pod "kube-proxy-4ppxq" in "kube-system" namespace to be "Ready" ...
	I0604 22:17:13.804023    4212 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fnjrb" in "kube-system" namespace to be "Ready" ...
	I0604 22:17:14.001967    4212 request.go:629] Waited for 197.7062ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fnjrb
	I0604 22:17:14.002240    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fnjrb
	I0604 22:17:14.002329    4212 round_trippers.go:469] Request Headers:
	I0604 22:17:14.002329    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:17:14.002329    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:17:14.009570    4212 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0604 22:17:14.207109    4212 request.go:629] Waited for 196.5663ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.131.101:8443/api/v1/nodes/ha-609500-m02
	I0604 22:17:14.207230    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500-m02
	I0604 22:17:14.207230    4212 round_trippers.go:469] Request Headers:
	I0604 22:17:14.207230    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:17:14.207230    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:17:14.213965    4212 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0604 22:17:14.215778    4212 pod_ready.go:92] pod "kube-proxy-fnjrb" in "kube-system" namespace has status "Ready":"True"
	I0604 22:17:14.215778    4212 pod_ready.go:81] duration metric: took 411.7517ms for pod "kube-proxy-fnjrb" in "kube-system" namespace to be "Ready" ...
	I0604 22:17:14.215855    4212 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-609500" in "kube-system" namespace to be "Ready" ...
	I0604 22:17:14.397073    4212 request.go:629] Waited for 180.9858ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-609500
	I0604 22:17:14.397471    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-609500
	I0604 22:17:14.397471    4212 round_trippers.go:469] Request Headers:
	I0604 22:17:14.397471    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:17:14.397471    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:17:14.402648    4212 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0604 22:17:14.603916    4212 request.go:629] Waited for 200.1243ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.131.101:8443/api/v1/nodes/ha-609500
	I0604 22:17:14.603916    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500
	I0604 22:17:14.603916    4212 round_trippers.go:469] Request Headers:
	I0604 22:17:14.603916    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:17:14.603916    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:17:14.615931    4212 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0604 22:17:14.616895    4212 pod_ready.go:92] pod "kube-scheduler-ha-609500" in "kube-system" namespace has status "Ready":"True"
	I0604 22:17:14.616895    4212 pod_ready.go:81] duration metric: took 401.0374ms for pod "kube-scheduler-ha-609500" in "kube-system" namespace to be "Ready" ...
	I0604 22:17:14.616895    4212 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-609500-m02" in "kube-system" namespace to be "Ready" ...
	I0604 22:17:14.797370    4212 request.go:629] Waited for 180.4733ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-609500-m02
	I0604 22:17:14.797773    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-609500-m02
	I0604 22:17:14.797773    4212 round_trippers.go:469] Request Headers:
	I0604 22:17:14.797773    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:17:14.797773    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:17:14.803773    4212 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0604 22:17:15.002138    4212 request.go:629] Waited for 197.9436ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.131.101:8443/api/v1/nodes/ha-609500-m02
	I0604 22:17:15.002438    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500-m02
	I0604 22:17:15.002438    4212 round_trippers.go:469] Request Headers:
	I0604 22:17:15.002500    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:17:15.002500    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:17:15.009674    4212 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0604 22:17:15.010461    4212 pod_ready.go:92] pod "kube-scheduler-ha-609500-m02" in "kube-system" namespace has status "Ready":"True"
	I0604 22:17:15.010461    4212 pod_ready.go:81] duration metric: took 393.5623ms for pod "kube-scheduler-ha-609500-m02" in "kube-system" namespace to be "Ready" ...
	I0604 22:17:15.010461    4212 pod_ready.go:38] duration metric: took 4.8097504s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
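The pod_ready.go entries above poll each system-critical pod until its Ready condition reports True. A minimal sketch of that kind of check using client-go (the package name, helper name, and error handling are illustrative, not minikube's code):

    package example

    import (
    	"context"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // isPodReady reports whether the named pod's Ready condition is True,
    // the condition each "waiting up to 6m0s for pod ..." step above waits on.
    func isPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) (bool, error) {
    	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
    	if err != nil {
    		return false, err
    	}
    	for _, cond := range pod.Status.Conditions {
    		if cond.Type == corev1.PodReady {
    			return cond.Status == corev1.ConditionTrue, nil
    		}
    	}
    	return false, nil
    }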
	I0604 22:17:15.010604    4212 api_server.go:52] waiting for apiserver process to appear ...
	I0604 22:17:15.022686    4212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0604 22:17:15.056930    4212 api_server.go:72] duration metric: took 16.4133166s to wait for apiserver process to appear ...
	I0604 22:17:15.056930    4212 api_server.go:88] waiting for apiserver healthz status ...
	I0604 22:17:15.056930    4212 api_server.go:253] Checking apiserver healthz at https://172.20.131.101:8443/healthz ...
	I0604 22:17:15.064716    4212 api_server.go:279] https://172.20.131.101:8443/healthz returned 200:
	ok
	I0604 22:17:15.065474    4212 round_trippers.go:463] GET https://172.20.131.101:8443/version
	I0604 22:17:15.065716    4212 round_trippers.go:469] Request Headers:
	I0604 22:17:15.065716    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:17:15.065716    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:17:15.066766    4212 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0604 22:17:15.067704    4212 api_server.go:141] control plane version: v1.30.1
	I0604 22:17:15.067766    4212 api_server.go:131] duration metric: took 10.7739ms to wait for apiserver health ...
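The healthz check logged by api_server.go above is a plain HTTPS GET against /healthz that treats a 200 response with body "ok" as healthy. A minimal sketch of such a probe (illustration only: TLS verification is skipped here for brevity, whereas a real client would present the cluster CA and client certificates):

    package example

    import (
    	"crypto/tls"
    	"io"
    	"net/http"
    	"time"
    )

    // apiserverHealthy GETs an endpoint such as
    // "https://172.20.131.101:8443/healthz" and reports 200/"ok" as healthy.
    func apiserverHealthy(endpoint string) (bool, error) {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			// Sketch only: skip cert verification instead of loading certs.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	resp, err := client.Get(endpoint)
    	if err != nil {
    		return false, err
    	}
    	defer resp.Body.Close()
    	body, err := io.ReadAll(resp.Body)
    	if err != nil {
    		return false, err
    	}
    	return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
    }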
	I0604 22:17:15.067793    4212 system_pods.go:43] waiting for kube-system pods to appear ...
	I0604 22:17:15.196808    4212 request.go:629] Waited for 128.9349ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods
	I0604 22:17:15.197297    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods
	I0604 22:17:15.197383    4212 round_trippers.go:469] Request Headers:
	I0604 22:17:15.197383    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:17:15.197444    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:17:15.208819    4212 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0604 22:17:15.216149    4212 system_pods.go:59] 17 kube-system pods found
	I0604 22:17:15.216149    4212 system_pods.go:61] "coredns-7db6d8ff4d-r68pn" [4f018ef8-6a1c-4e18-9f46-2341dca31903] Running
	I0604 22:17:15.216149    4212 system_pods.go:61] "coredns-7db6d8ff4d-zlxf9" [71fcfc44-30ee-4092-9ff7-af29b0ad0012] Running
	I0604 22:17:15.216149    4212 system_pods.go:61] "etcd-ha-609500" [94e7aa9b-cfb1-4910-b464-347d8a5506bc] Running
	I0604 22:17:15.216149    4212 system_pods.go:61] "etcd-ha-609500-m02" [2db71342-8a43-42fd-a415-7f05c00163f6] Running
	I0604 22:17:15.216149    4212 system_pods.go:61] "kindnet-7plk9" [59617539-bb65-430a-a2a6-9b29fe07b8e0] Running
	I0604 22:17:15.216149    4212 system_pods.go:61] "kindnet-phj2j" [56d23c07-ebe0-4876-9a2b-e170cbdf2ce2] Running
	I0604 22:17:15.216149    4212 system_pods.go:61] "kube-apiserver-ha-609500" [048ab298-bd5e-4e53-bfd5-315b7b0349aa] Running
	I0604 22:17:15.216149    4212 system_pods.go:61] "kube-apiserver-ha-609500-m02" [72263744-42da-4c56-bad3-7099b69eb3e7] Running
	I0604 22:17:15.216149    4212 system_pods.go:61] "kube-controller-manager-ha-609500" [6641ef19-a87e-425d-b698-04ac420f56f0] Running
	I0604 22:17:15.216149    4212 system_pods.go:61] "kube-controller-manager-ha-609500-m02" [8e6b0735-115c-456a-b99b-9c55270b1cb2] Running
	I0604 22:17:15.216149    4212 system_pods.go:61] "kube-proxy-4ppxq" [b0b0ad53-65c5-450e-981e-2034d197fc82] Running
	I0604 22:17:15.216149    4212 system_pods.go:61] "kube-proxy-fnjrb" [274d8218-2645-4664-a7fa-3303767b4f87] Running
	I0604 22:17:15.216149    4212 system_pods.go:61] "kube-scheduler-ha-609500" [64451eb3-387e-41ad-be19-ba5b3c45f5a8] Running
	I0604 22:17:15.216149    4212 system_pods.go:61] "kube-scheduler-ha-609500-m02" [b33a6f6a-2681-4248-b0dc-2a1d72041a48] Running
	I0604 22:17:15.216149    4212 system_pods.go:61] "kube-vip-ha-609500" [85ca2aa5-05d8-4f1b-80c8-7511304cc2bb] Running
	I0604 22:17:15.216149    4212 system_pods.go:61] "kube-vip-ha-609500-m02" [143e42dd-8e55-449a-921a-d67c132096e6] Running
	I0604 22:17:15.216149    4212 system_pods.go:61] "storage-provisioner" [c7f1304c-577a-4baf-84d0-51c6006a05f0] Running
	I0604 22:17:15.216149    4212 system_pods.go:74] duration metric: took 148.3547ms to wait for pod list to return data ...
	I0604 22:17:15.216149    4212 default_sa.go:34] waiting for default service account to be created ...
	I0604 22:17:15.404541    4212 request.go:629] Waited for 188.1443ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.131.101:8443/api/v1/namespaces/default/serviceaccounts
	I0604 22:17:15.404541    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/namespaces/default/serviceaccounts
	I0604 22:17:15.404541    4212 round_trippers.go:469] Request Headers:
	I0604 22:17:15.404680    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:17:15.404680    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:17:15.411164    4212 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0604 22:17:15.411795    4212 default_sa.go:45] found service account: "default"
	I0604 22:17:15.411795    4212 default_sa.go:55] duration metric: took 195.6444ms for default service account to be created ...
	I0604 22:17:15.411871    4212 system_pods.go:116] waiting for k8s-apps to be running ...
	I0604 22:17:15.592797    4212 request.go:629] Waited for 180.5579ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods
	I0604 22:17:15.592797    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods
	I0604 22:17:15.592797    4212 round_trippers.go:469] Request Headers:
	I0604 22:17:15.593102    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:17:15.593102    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:17:15.603158    4212 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0604 22:17:15.614390    4212 system_pods.go:86] 17 kube-system pods found
	I0604 22:17:15.615948    4212 system_pods.go:89] "coredns-7db6d8ff4d-r68pn" [4f018ef8-6a1c-4e18-9f46-2341dca31903] Running
	I0604 22:17:15.616032    4212 system_pods.go:89] "coredns-7db6d8ff4d-zlxf9" [71fcfc44-30ee-4092-9ff7-af29b0ad0012] Running
	I0604 22:17:15.616032    4212 system_pods.go:89] "etcd-ha-609500" [94e7aa9b-cfb1-4910-b464-347d8a5506bc] Running
	I0604 22:17:15.616032    4212 system_pods.go:89] "etcd-ha-609500-m02" [2db71342-8a43-42fd-a415-7f05c00163f6] Running
	I0604 22:17:15.616095    4212 system_pods.go:89] "kindnet-7plk9" [59617539-bb65-430a-a2a6-9b29fe07b8e0] Running
	I0604 22:17:15.616095    4212 system_pods.go:89] "kindnet-phj2j" [56d23c07-ebe0-4876-9a2b-e170cbdf2ce2] Running
	I0604 22:17:15.616095    4212 system_pods.go:89] "kube-apiserver-ha-609500" [048ab298-bd5e-4e53-bfd5-315b7b0349aa] Running
	I0604 22:17:15.616095    4212 system_pods.go:89] "kube-apiserver-ha-609500-m02" [72263744-42da-4c56-bad3-7099b69eb3e7] Running
	I0604 22:17:15.616095    4212 system_pods.go:89] "kube-controller-manager-ha-609500" [6641ef19-a87e-425d-b698-04ac420f56f0] Running
	I0604 22:17:15.616095    4212 system_pods.go:89] "kube-controller-manager-ha-609500-m02" [8e6b0735-115c-456a-b99b-9c55270b1cb2] Running
	I0604 22:17:15.616160    4212 system_pods.go:89] "kube-proxy-4ppxq" [b0b0ad53-65c5-450e-981e-2034d197fc82] Running
	I0604 22:17:15.616160    4212 system_pods.go:89] "kube-proxy-fnjrb" [274d8218-2645-4664-a7fa-3303767b4f87] Running
	I0604 22:17:15.616160    4212 system_pods.go:89] "kube-scheduler-ha-609500" [64451eb3-387e-41ad-be19-ba5b3c45f5a8] Running
	I0604 22:17:15.616160    4212 system_pods.go:89] "kube-scheduler-ha-609500-m02" [b33a6f6a-2681-4248-b0dc-2a1d72041a48] Running
	I0604 22:17:15.616160    4212 system_pods.go:89] "kube-vip-ha-609500" [85ca2aa5-05d8-4f1b-80c8-7511304cc2bb] Running
	I0604 22:17:15.616160    4212 system_pods.go:89] "kube-vip-ha-609500-m02" [143e42dd-8e55-449a-921a-d67c132096e6] Running
	I0604 22:17:15.616160    4212 system_pods.go:89] "storage-provisioner" [c7f1304c-577a-4baf-84d0-51c6006a05f0] Running
	I0604 22:17:15.616225    4212 system_pods.go:126] duration metric: took 204.352ms to wait for k8s-apps to be running ...
	I0604 22:17:15.616278    4212 system_svc.go:44] waiting for kubelet service to be running ....
	I0604 22:17:15.628979    4212 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0604 22:17:15.656779    4212 system_svc.go:56] duration metric: took 40.5296ms WaitForService to wait for kubelet
	I0604 22:17:15.656903    4212 kubeadm.go:576] duration metric: took 17.0134285s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0604 22:17:15.657001    4212 node_conditions.go:102] verifying NodePressure condition ...
	I0604 22:17:15.797534    4212 request.go:629] Waited for 140.1458ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.131.101:8443/api/v1/nodes
	I0604 22:17:15.797534    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes
	I0604 22:17:15.797534    4212 round_trippers.go:469] Request Headers:
	I0604 22:17:15.797534    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:17:15.797534    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:17:15.804653    4212 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0604 22:17:15.808049    4212 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0604 22:17:15.808049    4212 node_conditions.go:123] node cpu capacity is 2
	I0604 22:17:15.808049    4212 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0604 22:17:15.808049    4212 node_conditions.go:123] node cpu capacity is 2
	I0604 22:17:15.808049    4212 node_conditions.go:105] duration metric: took 151.0473ms to run NodePressure ...
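The node_conditions.go lines above report each node's ephemeral-storage and cpu capacity from the nodes list. A hedged sketch of reading those same fields with client-go (package and function names are illustrative):

    package example

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // printNodeCapacity lists every node and prints the ephemeral-storage and
    // cpu capacity values, the fields reported in the log above.
    func printNodeCapacity(ctx context.Context, cs kubernetes.Interface) error {
    	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
    	if err != nil {
    		return err
    	}
    	for _, n := range nodes.Items {
    		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
    		cpu := n.Status.Capacity[corev1.ResourceCPU]
    		fmt.Printf("node %s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
    	}
    	return nil
    }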
	I0604 22:17:15.808049    4212 start.go:240] waiting for startup goroutines ...
	I0604 22:17:15.808234    4212 start.go:254] writing updated cluster config ...
	I0604 22:17:15.811937    4212 out.go:177] 
	I0604 22:17:15.834811    4212 config.go:182] Loaded profile config "ha-609500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0604 22:17:15.835160    4212 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\config.json ...
	I0604 22:17:15.842246    4212 out.go:177] * Starting "ha-609500-m03" control-plane node in "ha-609500" cluster
	I0604 22:17:15.848542    4212 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0604 22:17:15.848542    4212 cache.go:56] Caching tarball of preloaded images
	I0604 22:17:15.849390    4212 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0604 22:17:15.849584    4212 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0604 22:17:15.849816    4212 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\config.json ...
	I0604 22:17:15.856035    4212 start.go:360] acquireMachinesLock for ha-609500-m03: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0604 22:17:15.856035    4212 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-609500-m03"
	I0604 22:17:15.856582    4212 start.go:93] Provisioning new machine with config: &{Name:ha-609500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19024/minikube-v1.33.1-1717518792-19024-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717518322-19024@sha256:d2210ba725128d67c6173c8b8d82d6c8736e8dad7a6c389a278f795205c6764f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-609500 Namespace:default APIServerHAVIP:172.20.143.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.131.101 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.20.128.86 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0604 22:17:15.856737    4212 start.go:125] createHost starting for "m03" (driver="hyperv")
	I0604 22:17:15.859037    4212 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0604 22:17:15.859892    4212 start.go:159] libmachine.API.Create for "ha-609500" (driver="hyperv")
	I0604 22:17:15.859934    4212 client.go:168] LocalClient.Create starting
	I0604 22:17:15.860111    4212 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0604 22:17:15.860745    4212 main.go:141] libmachine: Decoding PEM data...
	I0604 22:17:15.860745    4212 main.go:141] libmachine: Parsing certificate...
	I0604 22:17:15.860992    4212 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0604 22:17:15.860992    4212 main.go:141] libmachine: Decoding PEM data...
	I0604 22:17:15.861220    4212 main.go:141] libmachine: Parsing certificate...
	I0604 22:17:15.861289    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0604 22:17:18.016963    4212 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0604 22:17:18.016963    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:17:18.017782    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0604 22:17:19.970477    4212 main.go:141] libmachine: [stdout =====>] : False
	
	I0604 22:17:19.970477    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:17:19.971416    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0604 22:17:21.653518    4212 main.go:141] libmachine: [stdout =====>] : True
	
	I0604 22:17:21.653518    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:17:21.653518    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0604 22:17:25.950787    4212 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0604 22:17:25.951607    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:17:25.953809    4212 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1717518792-19024-amd64.iso...
	I0604 22:17:26.438118    4212 main.go:141] libmachine: Creating SSH key...
	I0604 22:17:26.772639    4212 main.go:141] libmachine: Creating VM...
	I0604 22:17:26.772639    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0604 22:17:30.092345    4212 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0604 22:17:30.092345    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:17:30.092345    4212 main.go:141] libmachine: Using switch "Default Switch"
	I0604 22:17:30.092345    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0604 22:17:32.067870    4212 main.go:141] libmachine: [stdout =====>] : True
	
	I0604 22:17:32.067870    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:17:32.067870    4212 main.go:141] libmachine: Creating VHD
	I0604 22:17:32.067870    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-609500-m03\fixed.vhd' -SizeBytes 10MB -Fixed
	I0604 22:17:36.743323    4212 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-609500-m03\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 81B54312-716D-4BDF-B061-C6E0D21F153B
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0604 22:17:36.743595    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:17:36.743595    4212 main.go:141] libmachine: Writing magic tar header
	I0604 22:17:36.743595    4212 main.go:141] libmachine: Writing SSH key tar header
	I0604 22:17:36.756623    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-609500-m03\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-609500-m03\disk.vhd' -VHDType Dynamic -DeleteSource
	I0604 22:17:40.216439    4212 main.go:141] libmachine: [stdout =====>] : 
	I0604 22:17:40.216439    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:17:40.216439    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-609500-m03\disk.vhd' -SizeBytes 20000MB
	I0604 22:17:43.035011    4212 main.go:141] libmachine: [stdout =====>] : 
	I0604 22:17:43.036052    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:17:43.036205    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-609500-m03 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-609500-m03' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0604 22:17:47.193661    4212 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%!)(MISSING) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-609500-m03 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0604 22:17:47.193661    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:17:47.193661    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-609500-m03 -DynamicMemoryEnabled $false
	I0604 22:17:49.778655    4212 main.go:141] libmachine: [stdout =====>] : 
	I0604 22:17:49.778655    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:17:49.778655    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-609500-m03 -Count 2
	I0604 22:17:52.268998    4212 main.go:141] libmachine: [stdout =====>] : 
	I0604 22:17:52.268998    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:17:52.269303    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-609500-m03 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-609500-m03\boot2docker.iso'
	I0604 22:17:55.200960    4212 main.go:141] libmachine: [stdout =====>] : 
	I0604 22:17:55.200960    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:17:55.200960    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-609500-m03 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-609500-m03\disk.vhd'
	I0604 22:17:58.219958    4212 main.go:141] libmachine: [stdout =====>] : 
	I0604 22:17:58.219958    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:17:58.219958    4212 main.go:141] libmachine: Starting VM...
	I0604 22:17:58.219958    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-609500-m03
	I0604 22:18:01.639702    4212 main.go:141] libmachine: [stdout =====>] : 
	I0604 22:18:01.640741    4212 main.go:141] libmachine: [stderr =====>] : 
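Each Hyper-V step above (New-VHD, Convert-VHD, New-VM, Set-VMMemory, Start-VM, ...) is a single "powershell.exe -NoProfile -NonInteractive <command>" invocation driven from Go. A minimal sketch of that pattern (the package and helper names are illustrative, not minikube's actual code):

    package example

    import "os/exec"

    // runHyperV runs one Hyper-V cmdlet the same way the log above shows:
    // powershell.exe -NoProfile -NonInteractive <command>.
    func runHyperV(command string) (string, error) {
    	ps := `C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`
    	out, err := exec.Command(ps, "-NoProfile", "-NonInteractive", command).CombinedOutput()
    	return string(out), err
    }

    // Example, matching the Start-VM call logged above:
    //   out, err := runHyperV(`Hyper-V\Start-VM ha-609500-m03`)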
	I0604 22:18:01.640741    4212 main.go:141] libmachine: Waiting for host to start...
	I0604 22:18:01.640741    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500-m03 ).state
	I0604 22:18:04.217573    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:18:04.217573    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:18:04.217573    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500-m03 ).networkadapters[0]).ipaddresses[0]
	I0604 22:18:07.089391    4212 main.go:141] libmachine: [stdout =====>] : 
	I0604 22:18:07.089391    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:18:08.101634    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500-m03 ).state
	I0604 22:18:10.565426    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:18:10.572139    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:18:10.572139    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500-m03 ).networkadapters[0]).ipaddresses[0]
	I0604 22:18:13.348244    4212 main.go:141] libmachine: [stdout =====>] : 
	I0604 22:18:13.348244    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:18:14.358758    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500-m03 ).state
	I0604 22:18:16.743312    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:18:16.748999    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:18:16.748999    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500-m03 ).networkadapters[0]).ipaddresses[0]
	I0604 22:18:19.503540    4212 main.go:141] libmachine: [stdout =====>] : 
	I0604 22:18:19.506899    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:18:20.518435    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500-m03 ).state
	I0604 22:18:22.928253    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:18:22.928253    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:18:22.928253    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500-m03 ).networkadapters[0]).ipaddresses[0]
	I0604 22:18:25.688092    4212 main.go:141] libmachine: [stdout =====>] : 
	I0604 22:18:25.688092    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:18:26.703500    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500-m03 ).state
	I0604 22:18:29.176259    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:18:29.176259    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:18:29.176259    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500-m03 ).networkadapters[0]).ipaddresses[0]
	I0604 22:18:32.046549    4212 main.go:141] libmachine: [stdout =====>] : 172.20.138.190
	
	I0604 22:18:32.046549    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:18:32.046549    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500-m03 ).state
	I0604 22:18:34.376539    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:18:34.389830    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:18:34.389899    4212 machine.go:94] provisionDockerMachine start ...
	I0604 22:18:34.389899    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500-m03 ).state
	I0604 22:18:36.782927    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:18:36.782927    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:18:36.782927    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500-m03 ).networkadapters[0]).ipaddresses[0]
	I0604 22:18:39.575206    4212 main.go:141] libmachine: [stdout =====>] : 172.20.138.190
	
	I0604 22:18:39.575206    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:18:39.579720    4212 main.go:141] libmachine: Using SSH client type: native
	I0604 22:18:39.594078    4212 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf6a540] 0xf6d120 <nil>  [] 0s} 172.20.138.190 22 <nil> <nil>}
	I0604 22:18:39.594078    4212 main.go:141] libmachine: About to run SSH command:
	hostname
	I0604 22:18:39.729162    4212 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0604 22:18:39.729162    4212 buildroot.go:166] provisioning hostname "ha-609500-m03"
	I0604 22:18:39.729162    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500-m03 ).state
	I0604 22:18:42.031451    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:18:42.031451    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:18:42.043815    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500-m03 ).networkadapters[0]).ipaddresses[0]
	I0604 22:18:44.836817    4212 main.go:141] libmachine: [stdout =====>] : 172.20.138.190
	
	I0604 22:18:44.847200    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:18:44.854009    4212 main.go:141] libmachine: Using SSH client type: native
	I0604 22:18:44.854311    4212 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf6a540] 0xf6d120 <nil>  [] 0s} 172.20.138.190 22 <nil> <nil>}
	I0604 22:18:44.854311    4212 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-609500-m03 && echo "ha-609500-m03" | sudo tee /etc/hostname
	I0604 22:18:45.021050    4212 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-609500-m03
	
	I0604 22:18:45.021131    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500-m03 ).state
	I0604 22:18:47.332759    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:18:47.332987    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:18:47.332987    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500-m03 ).networkadapters[0]).ipaddresses[0]
	I0604 22:18:50.067376    4212 main.go:141] libmachine: [stdout =====>] : 172.20.138.190
	
	I0604 22:18:50.067376    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:18:50.075072    4212 main.go:141] libmachine: Using SSH client type: native
	I0604 22:18:50.075072    4212 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf6a540] 0xf6d120 <nil>  [] 0s} 172.20.138.190 22 <nil> <nil>}
	I0604 22:18:50.075072    4212 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-609500-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-609500-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-609500-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0604 22:18:50.224074    4212 main.go:141] libmachine: SSH cmd err, output: <nil>: 
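The provisioning steps above (setting the hostname, editing /etc/hosts) are shell commands executed on the new VM over SSH. A minimal sketch of one such remote command using golang.org/x/crypto/ssh, assuming the caller already has a populated ssh.ClientConfig (package and helper names are illustrative):

    package example

    import "golang.org/x/crypto/ssh"

    // runRemote runs one shell command on the VM over SSH and returns its
    // combined output, similar to the "About to run SSH command" steps above.
    func runRemote(addr string, cfg *ssh.ClientConfig, command string) (string, error) {
    	client, err := ssh.Dial("tcp", addr, cfg)
    	if err != nil {
    		return "", err
    	}
    	defer client.Close()
    	sess, err := client.NewSession()
    	if err != nil {
    		return "", err
    	}
    	defer sess.Close()
    	out, err := sess.CombinedOutput(command)
    	return string(out), err
    }

    // Example:
    //   runRemote("172.20.138.190:22", cfg,
    //       `sudo hostname ha-609500-m03 && echo "ha-609500-m03" | sudo tee /etc/hostname`)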
	I0604 22:18:50.224074    4212 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0604 22:18:50.224074    4212 buildroot.go:174] setting up certificates
	I0604 22:18:50.224211    4212 provision.go:84] configureAuth start
	I0604 22:18:50.224211    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500-m03 ).state
	I0604 22:18:52.539972    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:18:52.539972    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:18:52.539972    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500-m03 ).networkadapters[0]).ipaddresses[0]
	I0604 22:18:55.298606    4212 main.go:141] libmachine: [stdout =====>] : 172.20.138.190
	
	I0604 22:18:55.298685    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:18:55.298994    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500-m03 ).state
	I0604 22:18:57.628356    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:18:57.628356    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:18:57.628356    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500-m03 ).networkadapters[0]).ipaddresses[0]
	I0604 22:19:00.387606    4212 main.go:141] libmachine: [stdout =====>] : 172.20.138.190
	
	I0604 22:19:00.399609    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:19:00.399689    4212 provision.go:143] copyHostCerts
	I0604 22:19:00.399988    4212 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0604 22:19:00.400328    4212 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0604 22:19:00.400328    4212 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0604 22:19:00.401153    4212 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0604 22:19:00.402422    4212 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0604 22:19:00.403237    4212 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0604 22:19:00.403348    4212 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0604 22:19:00.403648    4212 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0604 22:19:00.404307    4212 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0604 22:19:00.404843    4212 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0604 22:19:00.404843    4212 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0604 22:19:00.405260    4212 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0604 22:19:00.406351    4212 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-609500-m03 san=[127.0.0.1 172.20.138.190 ha-609500-m03 localhost minikube]
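Note: configureAuth reuses the shared CA under .minikube\certs and mints a per-machine server certificate whose SANs cover every name the Docker TLS endpoint can be reached by (loopback, the Hyper-V lease 172.20.138.190, the node name, localhost, minikube). A rough openssl equivalent of that step, for illustration only (minikube generates the cert in Go rather than shelling out):
	openssl genrsa -out server-key.pem 2048
	openssl req -new -key server-key.pem -subj "/O=jenkins.ha-609500-m03" -out server.csr
	printf 'subjectAltName=IP:127.0.0.1,IP:172.20.138.190,DNS:ha-609500-m03,DNS:localhost,DNS:minikube\n' > san.ext
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 365 -extfile san.ext -out server.pem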
	I0604 22:19:00.852655    4212 provision.go:177] copyRemoteCerts
	I0604 22:19:00.879864    4212 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0604 22:19:00.879864    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500-m03 ).state
	I0604 22:19:03.179424    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:19:03.179424    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:19:03.179424    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500-m03 ).networkadapters[0]).ipaddresses[0]
	I0604 22:19:05.968956    4212 main.go:141] libmachine: [stdout =====>] : 172.20.138.190
	
	I0604 22:19:05.970403    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:19:05.970403    4212 sshutil.go:53] new ssh client: &{IP:172.20.138.190 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-609500-m03\id_rsa Username:docker}
	I0604 22:19:06.085194    4212 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.2052892s)
	I0604 22:19:06.085321    4212 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0604 22:19:06.085897    4212 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0604 22:19:06.137417    4212 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0604 22:19:06.138044    4212 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0604 22:19:06.190563    4212 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0604 22:19:06.191140    4212 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0604 22:19:06.250566    4212 provision.go:87] duration metric: took 16.0262271s to configureAuth
	I0604 22:19:06.250566    4212 buildroot.go:189] setting minikube options for container-runtime
	I0604 22:19:06.251500    4212 config.go:182] Loaded profile config "ha-609500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0604 22:19:06.251500    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500-m03 ).state
	I0604 22:19:08.541517    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:19:08.541517    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:19:08.541648    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500-m03 ).networkadapters[0]).ipaddresses[0]
	I0604 22:19:11.305268    4212 main.go:141] libmachine: [stdout =====>] : 172.20.138.190
	
	I0604 22:19:11.316493    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:19:11.324839    4212 main.go:141] libmachine: Using SSH client type: native
	I0604 22:19:11.325508    4212 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf6a540] 0xf6d120 <nil>  [] 0s} 172.20.138.190 22 <nil> <nil>}
	I0604 22:19:11.325508    4212 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0604 22:19:11.466380    4212 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0604 22:19:11.466380    4212 buildroot.go:70] root file system type: tmpfs
	I0604 22:19:11.466380    4212 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0604 22:19:11.466934    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500-m03 ).state
	I0604 22:19:13.757387    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:19:13.757387    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:19:13.768947    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500-m03 ).networkadapters[0]).ipaddresses[0]
	I0604 22:19:16.568888    4212 main.go:141] libmachine: [stdout =====>] : 172.20.138.190
	
	I0604 22:19:16.568888    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:19:16.575237    4212 main.go:141] libmachine: Using SSH client type: native
	I0604 22:19:16.575237    4212 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf6a540] 0xf6d120 <nil>  [] 0s} 172.20.138.190 22 <nil> <nil>}
	I0604 22:19:16.575237    4212 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.20.131.101"
	Environment="NO_PROXY=172.20.131.101,172.20.128.86"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0604 22:19:16.741579    4212 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.20.131.101
	Environment=NO_PROXY=172.20.131.101,172.20.128.86
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0604 22:19:16.741855    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500-m03 ).state
	I0604 22:19:19.046101    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:19:19.060818    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:19:19.060818    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500-m03 ).networkadapters[0]).ipaddresses[0]
	I0604 22:19:21.850153    4212 main.go:141] libmachine: [stdout =====>] : 172.20.138.190
	
	I0604 22:19:21.861385    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:19:21.867169    4212 main.go:141] libmachine: Using SSH client type: native
	I0604 22:19:21.868026    4212 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf6a540] 0xf6d120 <nil>  [] 0s} 172.20.138.190 22 <nil> <nil>}
	I0604 22:19:21.868149    4212 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0604 22:19:24.113556    4212 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0604 22:19:24.113556    4212 machine.go:97] duration metric: took 49.7232595s to provisionDockerMachine
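Note: the "diff -u old new || { mv ...; daemon-reload; enable; restart; }" idiom above only swaps the unit and restarts Docker when the rendered file differs from what is installed; on this fresh node diff fails because no docker.service existed yet, hence the "Created symlink" enablement message. To confirm what systemd actually loaded afterwards (sketch; run on the guest):
	sudo systemctl cat docker.service            # should match the unit rendered above
	sudo systemctl show -p ExecStart docker.service
	sudo systemctl is-active docker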
	I0604 22:19:24.113556    4212 client.go:171] duration metric: took 2m8.2525918s to LocalClient.Create
	I0604 22:19:24.113700    4212 start.go:167] duration metric: took 2m8.2527778s to libmachine.API.Create "ha-609500"
	I0604 22:19:24.113700    4212 start.go:293] postStartSetup for "ha-609500-m03" (driver="hyperv")
	I0604 22:19:24.113700    4212 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0604 22:19:24.127305    4212 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0604 22:19:24.127305    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500-m03 ).state
	I0604 22:19:26.413451    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:19:26.413451    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:19:26.413451    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500-m03 ).networkadapters[0]).ipaddresses[0]
	I0604 22:19:29.219933    4212 main.go:141] libmachine: [stdout =====>] : 172.20.138.190
	
	I0604 22:19:29.219933    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:19:29.219933    4212 sshutil.go:53] new ssh client: &{IP:172.20.138.190 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-609500-m03\id_rsa Username:docker}
	I0604 22:19:29.345343    4212 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.217923s)
	I0604 22:19:29.356847    4212 ssh_runner.go:195] Run: cat /etc/os-release
	I0604 22:19:29.367528    4212 info.go:137] Remote host: Buildroot 2023.02.9
	I0604 22:19:29.367622    4212 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0604 22:19:29.368323    4212 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0604 22:19:29.369392    4212 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\140642.pem -> 140642.pem in /etc/ssl/certs
	I0604 22:19:29.369392    4212 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\140642.pem -> /etc/ssl/certs/140642.pem
	I0604 22:19:29.380511    4212 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0604 22:19:29.402994    4212 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\140642.pem --> /etc/ssl/certs/140642.pem (1708 bytes)
	I0604 22:19:29.457650    4212 start.go:296] duration metric: took 5.3439076s for postStartSetup
	I0604 22:19:29.460486    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500-m03 ).state
	I0604 22:19:31.799369    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:19:31.811156    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:19:31.811156    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500-m03 ).networkadapters[0]).ipaddresses[0]
	I0604 22:19:34.635575    4212 main.go:141] libmachine: [stdout =====>] : 172.20.138.190
	
	I0604 22:19:34.635663    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:19:34.636323    4212 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\config.json ...
	I0604 22:19:34.639081    4212 start.go:128] duration metric: took 2m18.7812291s to createHost
	I0604 22:19:34.639081    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500-m03 ).state
	I0604 22:19:36.957034    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:19:36.957277    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:19:36.957277    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500-m03 ).networkadapters[0]).ipaddresses[0]
	I0604 22:19:39.726440    4212 main.go:141] libmachine: [stdout =====>] : 172.20.138.190
	
	I0604 22:19:39.726440    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:19:39.732838    4212 main.go:141] libmachine: Using SSH client type: native
	I0604 22:19:39.732984    4212 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf6a540] 0xf6d120 <nil>  [] 0s} 172.20.138.190 22 <nil> <nil>}
	I0604 22:19:39.733570    4212 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0604 22:19:39.872822    4212 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717539579.879690082
	
	I0604 22:19:39.872977    4212 fix.go:216] guest clock: 1717539579.879690082
	I0604 22:19:39.872977    4212 fix.go:229] Guest: 2024-06-04 22:19:39.879690082 +0000 UTC Remote: 2024-06-04 22:19:34.6390814 +0000 UTC m=+621.179354901 (delta=5.240608682s)
	I0604 22:19:39.873095    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500-m03 ).state
	I0604 22:19:42.205185    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:19:42.205185    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:19:42.217825    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500-m03 ).networkadapters[0]).ipaddresses[0]
	I0604 22:19:44.986647    4212 main.go:141] libmachine: [stdout =====>] : 172.20.138.190
	
	I0604 22:19:44.986647    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:19:45.004810    4212 main.go:141] libmachine: Using SSH client type: native
	I0604 22:19:45.005387    4212 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf6a540] 0xf6d120 <nil>  [] 0s} 172.20.138.190 22 <nil> <nil>}
	I0604 22:19:45.005387    4212 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1717539579
	I0604 22:19:45.159494    4212 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Jun  4 22:19:39 UTC 2024
	
	I0604 22:19:45.159494    4212 fix.go:236] clock set: Tue Jun  4 22:19:39 UTC 2024
	 (err=<nil>)
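Note: the "%!s(MISSING)"/"%!N(MISSING)" tokens are Go format-verb escaping artifacts in the log line; the command actually sent is presumably "date +%s.%N". The guest clock read about 5.24s ahead of the host-side reference timestamp, so it is reset with "date -s @<epoch>". The comparison amounts to (sketch; the epoch value is from this run):
	GUEST_EPOCH=$(ssh docker@172.20.138.190 'date +%s.%N')   # 1717539579.879690082 here
	HOST_EPOCH=$(date +%s.%N)
	echo "delta: $(echo "$GUEST_EPOCH - $HOST_EPOCH" | bc)s"
	ssh docker@172.20.138.190 "sudo date -s @${GUEST_EPOCH%.*}"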
	I0604 22:19:45.159619    4212 start.go:83] releasing machines lock for "ha-609500-m03", held for 2m29.3023864s
	I0604 22:19:45.159761    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500-m03 ).state
	I0604 22:19:47.493325    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:19:47.502994    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:19:47.502994    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500-m03 ).networkadapters[0]).ipaddresses[0]
	I0604 22:19:50.257860    4212 main.go:141] libmachine: [stdout =====>] : 172.20.138.190
	
	I0604 22:19:50.257860    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:19:50.273017    4212 out.go:177] * Found network options:
	I0604 22:19:50.283263    4212 out.go:177]   - NO_PROXY=172.20.131.101,172.20.128.86
	W0604 22:19:50.286498    4212 proxy.go:119] fail to check proxy env: Error ip not in block
	W0604 22:19:50.287069    4212 proxy.go:119] fail to check proxy env: Error ip not in block
	I0604 22:19:50.289289    4212 out.go:177]   - NO_PROXY=172.20.131.101,172.20.128.86
	W0604 22:19:50.293476    4212 proxy.go:119] fail to check proxy env: Error ip not in block
	W0604 22:19:50.293476    4212 proxy.go:119] fail to check proxy env: Error ip not in block
	W0604 22:19:50.294961    4212 proxy.go:119] fail to check proxy env: Error ip not in block
	W0604 22:19:50.295047    4212 proxy.go:119] fail to check proxy env: Error ip not in block
	I0604 22:19:50.298563    4212 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0604 22:19:50.298688    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500-m03 ).state
	I0604 22:19:50.307138    4212 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0604 22:19:50.307138    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500-m03 ).state
	I0604 22:19:52.663975    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:19:52.664269    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:19:52.664332    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500-m03 ).networkadapters[0]).ipaddresses[0]
	I0604 22:19:52.665508    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:19:52.665629    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:19:52.665629    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500-m03 ).networkadapters[0]).ipaddresses[0]
	I0604 22:19:55.503539    4212 main.go:141] libmachine: [stdout =====>] : 172.20.138.190
	
	I0604 22:19:55.515282    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:19:55.515689    4212 sshutil.go:53] new ssh client: &{IP:172.20.138.190 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-609500-m03\id_rsa Username:docker}
	I0604 22:19:55.532080    4212 main.go:141] libmachine: [stdout =====>] : 172.20.138.190
	
	I0604 22:19:55.532080    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:19:55.532680    4212 sshutil.go:53] new ssh client: &{IP:172.20.138.190 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-609500-m03\id_rsa Username:docker}
	I0604 22:19:55.621432    4212 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.3141393s)
	W0604 22:19:55.621543    4212 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0604 22:19:55.636501    4212 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0604 22:19:55.696426    4212 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0604 22:19:55.696558    4212 start.go:494] detecting cgroup driver to use...
	I0604 22:19:55.696426    4212 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.3977846s)
	I0604 22:19:55.696758    4212 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0604 22:19:55.752022    4212 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0604 22:19:55.793637    4212 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0604 22:19:55.817754    4212 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0604 22:19:55.836562    4212 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0604 22:19:55.869709    4212 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0604 22:19:55.905012    4212 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0604 22:19:55.937060    4212 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0604 22:19:55.973824    4212 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0604 22:19:56.012148    4212 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0604 22:19:56.045272    4212 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0604 22:19:56.080610    4212 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0604 22:19:56.118203    4212 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0604 22:19:56.149240    4212 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0604 22:19:56.184479    4212 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0604 22:19:56.385530    4212 ssh_runner.go:195] Run: sudo systemctl restart containerd
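Note: even though this profile uses the docker runtime, the node's containerd config is normalized first: crictl initially points at containerd's socket, the pause image is pinned to registry.k8s.io/pause:3.9, SystemdCgroup is forced to false (cgroupfs), the CNI conf_dir is set to /etc/cni/net.d, and bridge netfilter plus IP forwarding are enabled. A quick way to spot-check those edits on the guest (sketch):
	grep -nE 'SystemdCgroup|sandbox_image|conf_dir|restrict_oom_score_adj' /etc/containerd/config.toml
	cat /etc/crictl.yaml
	sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward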
	I0604 22:19:56.421914    4212 start.go:494] detecting cgroup driver to use...
	I0604 22:19:56.435861    4212 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0604 22:19:56.480470    4212 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0604 22:19:56.522280    4212 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0604 22:19:56.569398    4212 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0604 22:19:56.612881    4212 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0604 22:19:56.654195    4212 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0604 22:19:56.721171    4212 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0604 22:19:56.747194    4212 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0604 22:19:56.796409    4212 ssh_runner.go:195] Run: which cri-dockerd
	I0604 22:19:56.818684    4212 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0604 22:19:56.845534    4212 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0604 22:19:56.894900    4212 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0604 22:19:57.112407    4212 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0604 22:19:57.321941    4212 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0604 22:19:57.326112    4212 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0604 22:19:57.384590    4212 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0604 22:19:57.601077    4212 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0604 22:20:00.153624    4212 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5525266s)
	I0604 22:20:00.167916    4212 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0604 22:20:00.209323    4212 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0604 22:20:00.251128    4212 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0604 22:20:00.461529    4212 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0604 22:20:00.674098    4212 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0604 22:20:00.898757    4212 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0604 22:20:00.946907    4212 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0604 22:20:00.988346    4212 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0604 22:20:01.216717    4212 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
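Note: containerd and crio are then stopped in favour of Docker with cri-dockerd as the CRI shim: crictl.yaml is rewritten to unix:///var/run/cri-dockerd.sock, a 10-cni.conf drop-in is installed for cri-docker.service, and a small /etc/docker/daemon.json (130 bytes, contents not shown in the log) switches Docker to the cgroupfs driver. Checks after the restarts (sketch; run on the guest):
	docker info --format '{{.CgroupDriver}}'     # expected: cgroupfs, per docker.go:574 above
	cat /etc/crictl.yaml
	systemctl status cri-docker.socket cri-docker.service --no-pager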
	I0604 22:20:01.340371    4212 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0604 22:20:01.353399    4212 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0604 22:20:01.362906    4212 start.go:562] Will wait 60s for crictl version
	I0604 22:20:01.372779    4212 ssh_runner.go:195] Run: which crictl
	I0604 22:20:01.398782    4212 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0604 22:20:01.465797    4212 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.1.3
	RuntimeApiVersion:  v1
	I0604 22:20:01.476373    4212 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0604 22:20:01.521856    4212 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0604 22:20:01.562379    4212 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.1.3 ...
	I0604 22:20:01.565177    4212 out.go:177]   - env NO_PROXY=172.20.131.101
	I0604 22:20:01.567782    4212 out.go:177]   - env NO_PROXY=172.20.131.101,172.20.128.86
	I0604 22:20:01.569825    4212 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0604 22:20:01.572673    4212 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0604 22:20:01.572673    4212 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0604 22:20:01.572673    4212 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0604 22:20:01.572673    4212 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:24:f8:85 Flags:up|broadcast|multicast|running}
	I0604 22:20:01.575844    4212 ip.go:210] interface addr: fe80::4093:d10:ab69:6c7d/64
	I0604 22:20:01.575844    4212 ip.go:210] interface addr: 172.20.128.1/20
	I0604 22:20:01.589382    4212 ssh_runner.go:195] Run: grep 172.20.128.1	host.minikube.internal$ /etc/hosts
	I0604 22:20:01.596412    4212 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.20.128.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0604 22:20:01.620012    4212 mustload.go:65] Loading cluster: ha-609500
	I0604 22:20:01.620897    4212 config.go:182] Loaded profile config "ha-609500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0604 22:20:01.621115    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500 ).state
	I0604 22:20:03.936853    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:20:03.949554    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:20:03.949647    4212 host.go:66] Checking if "ha-609500" exists ...
	I0604 22:20:03.949949    4212 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500 for IP: 172.20.138.190
	I0604 22:20:03.949949    4212 certs.go:194] generating shared ca certs ...
	I0604 22:20:03.949949    4212 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0604 22:20:03.950898    4212 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0604 22:20:03.951176    4212 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0604 22:20:03.951421    4212 certs.go:256] generating profile certs ...
	I0604 22:20:03.952091    4212 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\client.key
	I0604 22:20:03.952175    4212 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\apiserver.key.0c25c7e1
	I0604 22:20:03.952328    4212 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\apiserver.crt.0c25c7e1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.20.131.101 172.20.128.86 172.20.138.190 172.20.143.254]
	I0604 22:20:04.222840    4212 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\apiserver.crt.0c25c7e1 ...
	I0604 22:20:04.222840    4212 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\apiserver.crt.0c25c7e1: {Name:mk10e3a4dacee8587b1af1c89003e8c486ec29a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0604 22:20:04.232879    4212 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\apiserver.key.0c25c7e1 ...
	I0604 22:20:04.232879    4212 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\apiserver.key.0c25c7e1: {Name:mk01b58f546b2519a6aab4b1ecb91801a6947cef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0604 22:20:04.233443    4212 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\apiserver.crt.0c25c7e1 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\apiserver.crt
	I0604 22:20:04.244243    4212 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\apiserver.key.0c25c7e1 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\apiserver.key
	I0604 22:20:04.246796    4212 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\proxy-client.key
	I0604 22:20:04.246796    4212 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0604 22:20:04.247850    4212 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0604 22:20:04.247953    4212 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0604 22:20:04.247953    4212 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0604 22:20:04.247953    4212 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0604 22:20:04.248537    4212 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0604 22:20:04.248579    4212 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0604 22:20:04.248870    4212 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0604 22:20:04.249234    4212 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14064.pem (1338 bytes)
	W0604 22:20:04.249691    4212 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14064_empty.pem, impossibly tiny 0 bytes
	I0604 22:20:04.249732    4212 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0604 22:20:04.249982    4212 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0604 22:20:04.250462    4212 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0604 22:20:04.250665    4212 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0604 22:20:04.250894    4212 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\140642.pem (1708 bytes)
	I0604 22:20:04.250894    4212 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0604 22:20:04.251518    4212 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14064.pem -> /usr/share/ca-certificates/14064.pem
	I0604 22:20:04.251780    4212 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\140642.pem -> /usr/share/ca-certificates/140642.pem
	I0604 22:20:04.252050    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500 ).state
	I0604 22:20:06.588272    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:20:06.588272    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:20:06.588272    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500 ).networkadapters[0]).ipaddresses[0]
	I0604 22:20:09.360026    4212 main.go:141] libmachine: [stdout =====>] : 172.20.131.101
	
	I0604 22:20:09.360026    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:20:09.360898    4212 sshutil.go:53] new ssh client: &{IP:172.20.131.101 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-609500\id_rsa Username:docker}
	I0604 22:20:09.461548    4212 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0604 22:20:09.469270    4212 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0604 22:20:09.503950    4212 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0604 22:20:09.510557    4212 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0604 22:20:09.550240    4212 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0604 22:20:09.560149    4212 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0604 22:20:09.596920    4212 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0604 22:20:09.606555    4212 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0604 22:20:09.641751    4212 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0604 22:20:09.649999    4212 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0604 22:20:09.684273    4212 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0604 22:20:09.692229    4212 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0604 22:20:09.713270    4212 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0604 22:20:09.761078    4212 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0604 22:20:09.810381    4212 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0604 22:20:09.864999    4212 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0604 22:20:09.918456    4212 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0604 22:20:09.981883    4212 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0604 22:20:10.040937    4212 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0604 22:20:10.095167    4212 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0604 22:20:10.156074    4212 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0604 22:20:10.206754    4212 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14064.pem --> /usr/share/ca-certificates/14064.pem (1338 bytes)
	I0604 22:20:10.257371    4212 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\140642.pem --> /usr/share/ca-certificates/140642.pem (1708 bytes)
	I0604 22:20:10.307878    4212 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0604 22:20:10.342725    4212 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0604 22:20:10.377622    4212 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0604 22:20:10.414202    4212 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0604 22:20:10.449312    4212 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0604 22:20:10.485650    4212 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0604 22:20:10.522275    4212 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0604 22:20:10.568613    4212 ssh_runner.go:195] Run: openssl version
	I0604 22:20:10.592350    4212 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0604 22:20:10.629394    4212 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0604 22:20:10.637578    4212 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun  4 21:33 /usr/share/ca-certificates/minikubeCA.pem
	I0604 22:20:10.649327    4212 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0604 22:20:10.677162    4212 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0604 22:20:10.714204    4212 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14064.pem && ln -fs /usr/share/ca-certificates/14064.pem /etc/ssl/certs/14064.pem"
	I0604 22:20:10.747727    4212 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14064.pem
	I0604 22:20:10.756578    4212 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun  4 21:50 /usr/share/ca-certificates/14064.pem
	I0604 22:20:10.771672    4212 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14064.pem
	I0604 22:20:10.796870    4212 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14064.pem /etc/ssl/certs/51391683.0"
	I0604 22:20:10.832848    4212 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/140642.pem && ln -fs /usr/share/ca-certificates/140642.pem /etc/ssl/certs/140642.pem"
	I0604 22:20:10.867279    4212 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/140642.pem
	I0604 22:20:10.875875    4212 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun  4 21:50 /usr/share/ca-certificates/140642.pem
	I0604 22:20:10.893326    4212 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/140642.pem
	I0604 22:20:10.917107    4212 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/140642.pem /etc/ssl/certs/3ec20f2e.0"
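Note: each CA-style certificate is exposed to OpenSSL-based clients twice: copied under /usr/share/ca-certificates and symlinked into /etc/ssl/certs under its subject-hash name, which is what the "openssl x509 -hash -noout" calls compute (b5213941.0, 51391683.0 and 3ec20f2e.0 in this run). The mapping can be reproduced with (sketch):
	for pem in minikubeCA.pem 14064.pem 140642.pem; do
	  printf '%s -> /etc/ssl/certs/%s.0\n' "$pem" "$(openssl x509 -hash -noout -in /usr/share/ca-certificates/$pem)"
	done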
	I0604 22:20:10.953524    4212 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0604 22:20:10.963180    4212 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0604 22:20:10.963705    4212 kubeadm.go:928] updating node {m03 172.20.138.190 8443 v1.30.1 docker true true} ...
	I0604 22:20:10.963991    4212 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-609500-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.20.138.190
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-609500 Namespace:default APIServerHAVIP:172.20.143.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0604 22:20:10.964078    4212 kube-vip.go:115] generating kube-vip config ...
	I0604 22:20:10.976416    4212 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0604 22:20:11.005894    4212 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0604 22:20:11.005894    4212 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.20.143.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
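Note: kube-vip runs as a static pod on every control-plane node, advertises the HA virtual IP 172.20.143.254 on eth0 via ARP, and load-balances port 8443 (lb_enable/lb_port); leader election over the plndr-cp-lock lease decides which node currently answers for the VIP. To see where the VIP lives at any moment (sketch):
	ip -4 addr show dev eth0 | grep 172.20.143.254       # present only on the current kube-vip leader
	kubectl -n kube-system get lease plndr-cp-lock       # HOLDER column names the leader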
	I0604 22:20:11.019270    4212 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0604 22:20:11.035691    4212 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.1': No such file or directory
	
	Initiating transfer...
	I0604 22:20:11.047718    4212 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.1
	I0604 22:20:11.069777    4212 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet.sha256
	I0604 22:20:11.070002    4212 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl.sha256
	I0604 22:20:11.069777    4212 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm.sha256
	I0604 22:20:11.070285    4212 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubeadm -> /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0604 22:20:11.070133    4212 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubectl -> /var/lib/minikube/binaries/v1.30.1/kubectl
	I0604 22:20:11.086586    4212 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubectl
	I0604 22:20:11.086586    4212 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0604 22:20:11.088547    4212 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0604 22:20:11.095652    4212 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubectl': No such file or directory
	I0604 22:20:11.095813    4212 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubectl --> /var/lib/minikube/binaries/v1.30.1/kubectl (51454104 bytes)
	I0604 22:20:11.134796    4212 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubelet -> /var/lib/minikube/binaries/v1.30.1/kubelet
	I0604 22:20:11.134796    4212 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubeadm': No such file or directory
	I0604 22:20:11.134956    4212 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubeadm --> /var/lib/minikube/binaries/v1.30.1/kubeadm (50249880 bytes)
	I0604 22:20:11.150016    4212 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubelet
	I0604 22:20:11.196723    4212 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubelet': No such file or directory
	I0604 22:20:11.197065    4212 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubelet --> /var/lib/minikube/binaries/v1.30.1/kubelet (100100024 bytes)
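Note: because /var/lib/minikube/binaries/v1.30.1 is empty on the new node, kubelet, kubeadm and kubectl are pushed from the host's cache; the cache itself is populated from dl.k8s.io with a published .sha256 checksum per binary. The equivalent manual download-and-verify looks like (sketch; same URLs as logged above):
	VER=v1.30.1
	for b in kubelet kubeadm kubectl; do
	  curl -fLO "https://dl.k8s.io/release/${VER}/bin/linux/amd64/${b}"
	  curl -fL "https://dl.k8s.io/release/${VER}/bin/linux/amd64/${b}.sha256" -o "${b}.sha256"
	  echo "$(cat ${b}.sha256)  ${b}" | sha256sum --check
	done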
	I0604 22:20:12.583443    4212 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0604 22:20:12.604101    4212 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0604 22:20:12.639649    4212 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0604 22:20:12.674681    4212 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0604 22:20:12.724903    4212 ssh_runner.go:195] Run: grep 172.20.143.254	control-plane.minikube.internal$ /etc/hosts
	I0604 22:20:12.734124    4212 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.20.143.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0604 22:20:12.774188    4212 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0604 22:20:12.991005    4212 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0604 22:20:13.026934    4212 host.go:66] Checking if "ha-609500" exists ...
	I0604 22:20:13.027749    4212 start.go:316] joinCluster: &{Name:ha-609500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19024/minikube-v1.33.1-1717518792-19024-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717518322-19024@sha256:d2210ba725128d67c6173c8b8d82d6c8736e8dad7a6c389a278f795205c6764f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-609500 Namespace:default APIServerHAVIP:172.20.143.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.131.101 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.20.128.86 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:172.20.138.190 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0604 22:20:13.027966    4212 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0604 22:20:13.027966    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500 ).state
	I0604 22:20:15.416063    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:20:15.427680    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:20:15.427778    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500 ).networkadapters[0]).ipaddresses[0]
	I0604 22:20:18.214038    4212 main.go:141] libmachine: [stdout =====>] : 172.20.131.101
	
	I0604 22:20:18.214038    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:20:18.227310    4212 sshutil.go:53] new ssh client: &{IP:172.20.131.101 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-609500\id_rsa Username:docker}
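Each remote command in this phase is preceded by the same Hyper-V dance: query the VM state, read its first IP address through PowerShell, then open an SSH session to that address with the profile's id_rsa key. A small sketch of the IP lookup, reusing the exact PowerShell expression from the log:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // vmIP asks Hyper-V for the first IP address of a VM, using the same
    // PowerShell expression the driver runs above before dialing SSH.
    func vmIP(name string) (string, error) {
        cmd := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive",
            fmt.Sprintf("(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", name))
        out, err := cmd.Output()
        return strings.TrimSpace(string(out)), err
    }

    func main() {
        ip, err := vmIP("ha-609500")
        fmt.Println(ip, err)
    }
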
	I0604 22:20:18.463986    4212 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm token create --print-join-command --ttl=0": (5.4359772s)
	I0604 22:20:18.463986    4212 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:172.20.138.190 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0604 22:20:18.463986    4212 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token jx6qm8.mzmojl3pfbuz827c --discovery-token-ca-cert-hash sha256:0ff0b921e821f4428dbcf012dd411f291cf65527f1de8321919343295d694e15 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-609500-m03 --control-plane --apiserver-advertise-address=172.20.138.190 --apiserver-bind-port=8443"
	I0604 22:21:03.632199    4212 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token jx6qm8.mzmojl3pfbuz827c --discovery-token-ca-cert-hash sha256:0ff0b921e821f4428dbcf012dd411f291cf65527f1de8321919343295d694e15 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-609500-m03 --control-plane --apiserver-advertise-address=172.20.138.190 --apiserver-bind-port=8443": (45.1678572s)
	I0604 22:21:03.632199    4212 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0604 22:21:04.602789    4212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-609500-m03 minikube.k8s.io/updated_at=2024_06_04T22_21_04_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=901ac483c3e1097c63cda7493d918b612a8127f5 minikube.k8s.io/name=ha-609500 minikube.k8s.io/primary=false
	I0604 22:21:05.083489    4212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-609500-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0604 22:21:05.264102    4212 start.go:318] duration metric: took 52.2359416s to joinCluster
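Stripped of the driver plumbing, the joinCluster step above is two kubeadm invocations plus a kubectl touch-up: an existing control-plane node mints a join command with a non-expiring token, the new machine runs it with --control-plane so it comes up as a third API-server/etcd member, and the node is then labelled and has its NoSchedule taint removed. A sketch of that flow (local exec stands in for the SSH runner; the advertise address and node name are the ones from this run):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // 1) On an existing control-plane node: print a join command.
        //    --ttl=0 makes the bootstrap token non-expiring, as in the log.
        out, err := exec.Command("kubeadm", "token", "create", "--print-join-command", "--ttl=0").Output()
        if err != nil {
            fmt.Println("token create failed:", err)
            return
        }
        join := strings.TrimSpace(string(out)) +
            " --control-plane --apiserver-advertise-address=172.20.138.190 --apiserver-bind-port=8443"

        // 2) On the joining machine: run that command (sudo, via bash -c).
        if err := exec.Command("/bin/bash", "-c", "sudo "+join).Run(); err != nil {
            fmt.Println("kubeadm join failed:", err)
            return
        }

        // 3) Label the new node and drop the control-plane NoSchedule taint.
        _ = exec.Command("kubectl", "label", "--overwrite", "nodes", "ha-609500-m03", "minikube.k8s.io/primary=false").Run()
        _ = exec.Command("kubectl", "taint", "nodes", "ha-609500-m03", "node-role.kubernetes.io/control-plane:NoSchedule-").Run()
    }
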
	I0604 22:21:05.264431    4212 start.go:234] Will wait 6m0s for node &{Name:m03 IP:172.20.138.190 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0604 22:21:05.268143    4212 out.go:177] * Verifying Kubernetes components...
	I0604 22:21:05.265648    4212 config.go:182] Loaded profile config "ha-609500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0604 22:21:05.283509    4212 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0604 22:21:05.730998    4212 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0604 22:21:05.784578    4212 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0604 22:21:05.784578    4212 kapi.go:59] client config for ha-609500: &rest.Config{Host:"https://172.20.143.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-609500\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-609500\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x240e1a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
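One detail worth pulling out of the config dump above: QPS:0 and Burst:0 mean client-go falls back to its defaults of 5 requests per second with a burst of 10, which is why the "Waited ... due to client-side throttling" lines show up during the polling below. Raising the limits is a one-line change on the rest.Config before the clientset is built; a minimal sketch (the kubeconfig path and the chosen limits are illustrative):

    package main

    import (
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
        if err != nil {
            panic(err)
        }
        // Zero values (as in the dump above) mean client-go's defaults of
        // 5 QPS with a burst of 10; raising them avoids the client-side
        // throttling waits that appear later in this log.
        cfg.QPS = 50
        cfg.Burst = 100

        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        fmt.Println("client configured:", cs != nil)
    }
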
	W0604 22:21:05.784578    4212 kubeadm.go:477] Overriding stale ClientConfig host https://172.20.143.254:8443 with https://172.20.131.101:8443
	I0604 22:21:05.786260    4212 node_ready.go:35] waiting up to 6m0s for node "ha-609500-m03" to be "Ready" ...
	I0604 22:21:05.786455    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500-m03
	I0604 22:21:05.786455    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:05.786455    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:05.786455    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:05.806924    4212 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0604 22:21:06.298213    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500-m03
	I0604 22:21:06.298213    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:06.298213    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:06.298213    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:06.305051    4212 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0604 22:21:06.796635    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500-m03
	I0604 22:21:06.796733    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:06.796733    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:06.796733    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:06.804662    4212 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0604 22:21:07.297865    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500-m03
	I0604 22:21:07.297865    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:07.297865    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:07.297865    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:07.302670    4212 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0604 22:21:07.799840    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500-m03
	I0604 22:21:07.800046    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:07.800046    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:07.800046    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:07.805636    4212 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0604 22:21:07.806392    4212 node_ready.go:53] node "ha-609500-m03" has status "Ready":"False"
	I0604 22:21:08.298024    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500-m03
	I0604 22:21:08.298024    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:08.298024    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:08.298024    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:08.418517    4212 round_trippers.go:574] Response Status: 200 OK in 120 milliseconds
	I0604 22:21:08.794778    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500-m03
	I0604 22:21:08.794778    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:08.794778    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:08.794778    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:08.800217    4212 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0604 22:21:09.296395    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500-m03
	I0604 22:21:09.296395    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:09.296513    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:09.296513    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:09.296820    4212 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0604 22:21:09.800120    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500-m03
	I0604 22:21:09.800120    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:09.800120    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:09.800120    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:09.800884    4212 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0604 22:21:09.807493    4212 node_ready.go:53] node "ha-609500-m03" has status "Ready":"False"
	I0604 22:21:10.288008    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500-m03
	I0604 22:21:10.288008    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:10.288008    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:10.288008    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:10.288552    4212 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0604 22:21:10.798569    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500-m03
	I0604 22:21:10.798569    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:10.798569    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:10.798569    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:10.803854    4212 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0604 22:21:11.306200    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500-m03
	I0604 22:21:11.306200    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:11.306200    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:11.306200    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:11.306761    4212 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0604 22:21:11.792415    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500-m03
	I0604 22:21:11.792534    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:11.792534    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:11.792534    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:11.792819    4212 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0604 22:21:11.798258    4212 node_ready.go:49] node "ha-609500-m03" has status "Ready":"True"
	I0604 22:21:11.798358    4212 node_ready.go:38] duration metric: took 6.011951s for node "ha-609500-m03" to be "Ready" ...
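The half-second GET loop above is simply "fetch the node until its Ready condition reports True". The equivalent check with client-go looks roughly like this (kubeconfig path is a placeholder; minikube's real implementation lives in node_ready.go and is not reproduced here):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // nodeReady reports whether the node's NodeReady condition is True.
    func nodeReady(n *corev1.Node) bool {
        for _, c := range n.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        deadline := time.Now().Add(6 * time.Minute) // same budget as the log
        for time.Now().Before(deadline) {
            n, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-609500-m03", metav1.GetOptions{})
            if err == nil && nodeReady(n) {
                fmt.Println("node is Ready")
                return
            }
            time.Sleep(500 * time.Millisecond) // roughly the cadence visible above
        }
        fmt.Println("timed out waiting for node")
    }
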
	I0604 22:21:11.798358    4212 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0604 22:21:11.798538    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods
	I0604 22:21:11.798538    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:11.798538    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:11.798538    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:11.824279    4212 round_trippers.go:574] Response Status: 200 OK in 25 milliseconds
	I0604 22:21:11.840924    4212 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-r68pn" in "kube-system" namespace to be "Ready" ...
	I0604 22:21:11.840924    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-r68pn
	I0604 22:21:11.840924    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:11.841488    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:11.841488    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:11.847420    4212 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0604 22:21:11.848349    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500
	I0604 22:21:11.848349    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:11.848349    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:11.848349    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:11.853687    4212 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0604 22:21:11.854843    4212 pod_ready.go:92] pod "coredns-7db6d8ff4d-r68pn" in "kube-system" namespace has status "Ready":"True"
	I0604 22:21:11.854843    4212 pod_ready.go:81] duration metric: took 13.9189ms for pod "coredns-7db6d8ff4d-r68pn" in "kube-system" namespace to be "Ready" ...
	I0604 22:21:11.854843    4212 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-zlxf9" in "kube-system" namespace to be "Ready" ...
	I0604 22:21:11.855179    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-zlxf9
	I0604 22:21:11.855253    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:11.855253    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:11.855253    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:11.859775    4212 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0604 22:21:11.861207    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500
	I0604 22:21:11.861207    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:11.861207    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:11.861207    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:11.861734    4212 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0604 22:21:11.865664    4212 pod_ready.go:92] pod "coredns-7db6d8ff4d-zlxf9" in "kube-system" namespace has status "Ready":"True"
	I0604 22:21:11.865664    4212 pod_ready.go:81] duration metric: took 10.8201ms for pod "coredns-7db6d8ff4d-zlxf9" in "kube-system" namespace to be "Ready" ...
	I0604 22:21:11.865664    4212 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-609500" in "kube-system" namespace to be "Ready" ...
	I0604 22:21:11.865664    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods/etcd-ha-609500
	I0604 22:21:11.865664    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:11.865664    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:11.865664    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:11.870430    4212 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0604 22:21:11.871620    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500
	I0604 22:21:11.871678    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:11.871678    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:11.871678    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:11.875250    4212 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0604 22:21:11.876098    4212 pod_ready.go:92] pod "etcd-ha-609500" in "kube-system" namespace has status "Ready":"True"
	I0604 22:21:11.876098    4212 pod_ready.go:81] duration metric: took 10.4346ms for pod "etcd-ha-609500" in "kube-system" namespace to be "Ready" ...
	I0604 22:21:11.876098    4212 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-609500-m02" in "kube-system" namespace to be "Ready" ...
	I0604 22:21:11.876098    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods/etcd-ha-609500-m02
	I0604 22:21:11.876098    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:11.876098    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:11.876098    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:11.876723    4212 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0604 22:21:11.882006    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500-m02
	I0604 22:21:11.882006    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:11.882552    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:11.882552    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:11.904727    4212 round_trippers.go:574] Response Status: 200 OK in 22 milliseconds
	I0604 22:21:11.905266    4212 pod_ready.go:92] pod "etcd-ha-609500-m02" in "kube-system" namespace has status "Ready":"True"
	I0604 22:21:11.905266    4212 pod_ready.go:81] duration metric: took 29.1673ms for pod "etcd-ha-609500-m02" in "kube-system" namespace to be "Ready" ...
	I0604 22:21:11.905266    4212 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-609500-m03" in "kube-system" namespace to be "Ready" ...
	I0604 22:21:11.995852    4212 request.go:629] Waited for 89.8851ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods/etcd-ha-609500-m03
	I0604 22:21:11.995852    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods/etcd-ha-609500-m03
	I0604 22:21:11.995852    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:11.995852    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:11.995852    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:12.002111    4212 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0604 22:21:12.198529    4212 request.go:629] Waited for 195.5625ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.131.101:8443/api/v1/nodes/ha-609500-m03
	I0604 22:21:12.198529    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500-m03
	I0604 22:21:12.198529    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:12.198529    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:12.198529    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:12.204359    4212 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0604 22:21:12.413883    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods/etcd-ha-609500-m03
	I0604 22:21:12.414010    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:12.414129    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:12.414129    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:12.414364    4212 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0604 22:21:12.599281    4212 request.go:629] Waited for 179.2562ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.131.101:8443/api/v1/nodes/ha-609500-m03
	I0604 22:21:12.599457    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500-m03
	I0604 22:21:12.599457    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:12.599542    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:12.599542    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:12.600255    4212 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0604 22:21:12.906854    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods/etcd-ha-609500-m03
	I0604 22:21:12.906854    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:12.906854    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:12.906854    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:12.912387    4212 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0604 22:21:12.999791    4212 request.go:629] Waited for 86.4028ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.131.101:8443/api/v1/nodes/ha-609500-m03
	I0604 22:21:12.999791    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500-m03
	I0604 22:21:12.999791    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:12.999791    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:12.999791    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:13.001813    4212 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0604 22:21:13.006826    4212 pod_ready.go:92] pod "etcd-ha-609500-m03" in "kube-system" namespace has status "Ready":"True"
	I0604 22:21:13.006826    4212 pod_ready.go:81] duration metric: took 1.1015518s for pod "etcd-ha-609500-m03" in "kube-system" namespace to be "Ready" ...
	I0604 22:21:13.006826    4212 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-609500" in "kube-system" namespace to be "Ready" ...
	I0604 22:21:13.213243    4212 request.go:629] Waited for 206.2963ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-609500
	I0604 22:21:13.213530    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-609500
	I0604 22:21:13.213530    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:13.213530    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:13.213626    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:13.214244    4212 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0604 22:21:13.401588    4212 request.go:629] Waited for 179.1495ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.131.101:8443/api/v1/nodes/ha-609500
	I0604 22:21:13.401663    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500
	I0604 22:21:13.401663    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:13.401663    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:13.401663    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:13.416157    4212 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0604 22:21:13.417245    4212 pod_ready.go:92] pod "kube-apiserver-ha-609500" in "kube-system" namespace has status "Ready":"True"
	I0604 22:21:13.417245    4212 pod_ready.go:81] duration metric: took 410.4156ms for pod "kube-apiserver-ha-609500" in "kube-system" namespace to be "Ready" ...
	I0604 22:21:13.417245    4212 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-609500-m02" in "kube-system" namespace to be "Ready" ...
	I0604 22:21:13.597382    4212 request.go:629] Waited for 179.9072ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-609500-m02
	I0604 22:21:13.597546    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-609500-m02
	I0604 22:21:13.597546    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:13.597546    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:13.597546    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:13.604877    4212 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0604 22:21:13.793118    4212 request.go:629] Waited for 186.8701ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.131.101:8443/api/v1/nodes/ha-609500-m02
	I0604 22:21:13.793291    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500-m02
	I0604 22:21:13.793291    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:13.793291    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:13.793291    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:13.793987    4212 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0604 22:21:13.799123    4212 pod_ready.go:92] pod "kube-apiserver-ha-609500-m02" in "kube-system" namespace has status "Ready":"True"
	I0604 22:21:13.799243    4212 pod_ready.go:81] duration metric: took 381.9947ms for pod "kube-apiserver-ha-609500-m02" in "kube-system" namespace to be "Ready" ...
	I0604 22:21:13.799243    4212 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-609500-m03" in "kube-system" namespace to be "Ready" ...
	I0604 22:21:14.006171    4212 request.go:629] Waited for 206.7415ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-609500-m03
	I0604 22:21:14.006276    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-609500-m03
	I0604 22:21:14.006276    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:14.006276    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:14.006356    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:14.006536    4212 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0604 22:21:14.206100    4212 request.go:629] Waited for 191.4948ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.131.101:8443/api/v1/nodes/ha-609500-m03
	I0604 22:21:14.206100    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500-m03
	I0604 22:21:14.206100    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:14.206100    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:14.206100    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:14.208482    4212 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0604 22:21:14.403445    4212 request.go:629] Waited for 89.7208ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-609500-m03
	I0604 22:21:14.403580    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-609500-m03
	I0604 22:21:14.403580    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:14.403580    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:14.403580    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:14.407972    4212 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0604 22:21:14.604368    4212 request.go:629] Waited for 196.2793ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.131.101:8443/api/v1/nodes/ha-609500-m03
	I0604 22:21:14.604495    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500-m03
	I0604 22:21:14.604550    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:14.604550    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:14.604550    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:14.625737    4212 round_trippers.go:574] Response Status: 200 OK in 21 milliseconds
	I0604 22:21:14.810754    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-609500-m03
	I0604 22:21:14.810754    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:14.810754    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:14.810754    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:14.811278    4212 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0604 22:21:15.018120    4212 request.go:629] Waited for 199.3575ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.131.101:8443/api/v1/nodes/ha-609500-m03
	I0604 22:21:15.018120    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500-m03
	I0604 22:21:15.018120    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:15.018120    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:15.018120    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:15.023938    4212 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0604 22:21:15.309397    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-609500-m03
	I0604 22:21:15.309653    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:15.309653    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:15.309653    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:15.310201    4212 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0604 22:21:15.400406    4212 request.go:629] Waited for 82.2411ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.131.101:8443/api/v1/nodes/ha-609500-m03
	I0604 22:21:15.400406    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500-m03
	I0604 22:21:15.400406    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:15.400406    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:15.400406    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:15.401139    4212 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0604 22:21:15.813243    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-609500-m03
	I0604 22:21:15.813243    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:15.813243    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:15.813243    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:15.818579    4212 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0604 22:21:15.819951    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500-m03
	I0604 22:21:15.819951    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:15.819951    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:15.819951    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:15.826367    4212 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0604 22:21:15.827528    4212 pod_ready.go:102] pod "kube-apiserver-ha-609500-m03" in "kube-system" namespace has status "Ready":"False"
	I0604 22:21:16.301129    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-609500-m03
	I0604 22:21:16.301129    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:16.301129    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:16.301129    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:16.308116    4212 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0604 22:21:16.309451    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500-m03
	I0604 22:21:16.309451    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:16.309451    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:16.309451    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:16.312948    4212 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0604 22:21:16.803178    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-609500-m03
	I0604 22:21:16.803178    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:16.803178    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:16.803178    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:16.812310    4212 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0604 22:21:16.813108    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500-m03
	I0604 22:21:16.813108    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:16.813108    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:16.813108    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:16.816085    4212 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0604 22:21:17.312731    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-609500-m03
	I0604 22:21:17.312731    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:17.312731    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:17.312731    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:17.321414    4212 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0604 22:21:17.324363    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500-m03
	I0604 22:21:17.324448    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:17.324448    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:17.324448    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:17.327430    4212 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0604 22:21:17.332264    4212 pod_ready.go:92] pod "kube-apiserver-ha-609500-m03" in "kube-system" namespace has status "Ready":"True"
	I0604 22:21:17.332320    4212 pod_ready.go:81] duration metric: took 3.5330495s for pod "kube-apiserver-ha-609500-m03" in "kube-system" namespace to be "Ready" ...
	I0604 22:21:17.332320    4212 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-609500" in "kube-system" namespace to be "Ready" ...
	I0604 22:21:17.332431    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-609500
	I0604 22:21:17.332490    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:17.332490    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:17.332539    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:17.333000    4212 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0604 22:21:17.404977    4212 request.go:629] Waited for 67.1812ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.131.101:8443/api/v1/nodes/ha-609500
	I0604 22:21:17.405184    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500
	I0604 22:21:17.405184    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:17.405184    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:17.405184    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:17.405850    4212 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0604 22:21:17.412283    4212 pod_ready.go:92] pod "kube-controller-manager-ha-609500" in "kube-system" namespace has status "Ready":"True"
	I0604 22:21:17.412283    4212 pod_ready.go:81] duration metric: took 79.9631ms for pod "kube-controller-manager-ha-609500" in "kube-system" namespace to be "Ready" ...
	I0604 22:21:17.412283    4212 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-609500-m02" in "kube-system" namespace to be "Ready" ...
	I0604 22:21:17.603229    4212 request.go:629] Waited for 190.7485ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-609500-m02
	I0604 22:21:17.603909    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-609500-m02
	I0604 22:21:17.603909    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:17.604011    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:17.604011    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:17.609529    4212 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0604 22:21:17.800300    4212 request.go:629] Waited for 190.1319ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.131.101:8443/api/v1/nodes/ha-609500-m02
	I0604 22:21:17.800909    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500-m02
	I0604 22:21:17.800909    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:17.800909    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:17.800909    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:17.809106    4212 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0604 22:21:17.809714    4212 pod_ready.go:92] pod "kube-controller-manager-ha-609500-m02" in "kube-system" namespace has status "Ready":"True"
	I0604 22:21:17.809714    4212 pod_ready.go:81] duration metric: took 397.4278ms for pod "kube-controller-manager-ha-609500-m02" in "kube-system" namespace to be "Ready" ...
	I0604 22:21:17.809714    4212 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-609500-m03" in "kube-system" namespace to be "Ready" ...
	I0604 22:21:18.003120    4212 request.go:629] Waited for 193.185ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-609500-m03
	I0604 22:21:18.003120    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-609500-m03
	I0604 22:21:18.003120    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:18.003120    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:18.003120    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:18.003678    4212 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0604 22:21:18.198781    4212 request.go:629] Waited for 188.9792ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.131.101:8443/api/v1/nodes/ha-609500-m03
	I0604 22:21:18.198781    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500-m03
	I0604 22:21:18.198781    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:18.198781    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:18.198781    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:18.199317    4212 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0604 22:21:18.207545    4212 pod_ready.go:92] pod "kube-controller-manager-ha-609500-m03" in "kube-system" namespace has status "Ready":"True"
	I0604 22:21:18.207545    4212 pod_ready.go:81] duration metric: took 397.8278ms for pod "kube-controller-manager-ha-609500-m03" in "kube-system" namespace to be "Ready" ...
	I0604 22:21:18.207848    4212 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4ppxq" in "kube-system" namespace to be "Ready" ...
	I0604 22:21:18.406090    4212 request.go:629] Waited for 197.9927ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4ppxq
	I0604 22:21:18.406090    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4ppxq
	I0604 22:21:18.406090    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:18.406090    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:18.406090    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:18.406554    4212 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0604 22:21:18.603123    4212 request.go:629] Waited for 196.5256ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.131.101:8443/api/v1/nodes/ha-609500
	I0604 22:21:18.603123    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500
	I0604 22:21:18.603123    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:18.603123    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:18.603371    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:18.611599    4212 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0604 22:21:18.612557    4212 pod_ready.go:92] pod "kube-proxy-4ppxq" in "kube-system" namespace has status "Ready":"True"
	I0604 22:21:18.612557    4212 pod_ready.go:81] duration metric: took 404.7052ms for pod "kube-proxy-4ppxq" in "kube-system" namespace to be "Ready" ...
	I0604 22:21:18.612557    4212 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fnjrb" in "kube-system" namespace to be "Ready" ...
	I0604 22:21:18.800312    4212 request.go:629] Waited for 187.5633ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fnjrb
	I0604 22:21:18.800486    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fnjrb
	I0604 22:21:18.800486    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:18.800486    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:18.800486    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:18.801136    4212 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0604 22:21:19.011197    4212 request.go:629] Waited for 203.8766ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.131.101:8443/api/v1/nodes/ha-609500-m02
	I0604 22:21:19.011810    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500-m02
	I0604 22:21:19.011907    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:19.011907    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:19.011907    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:19.012453    4212 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0604 22:21:19.018359    4212 pod_ready.go:92] pod "kube-proxy-fnjrb" in "kube-system" namespace has status "Ready":"True"
	I0604 22:21:19.018359    4212 pod_ready.go:81] duration metric: took 405.7989ms for pod "kube-proxy-fnjrb" in "kube-system" namespace to be "Ready" ...
	I0604 22:21:19.018910    4212 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mqpzs" in "kube-system" namespace to be "Ready" ...
	I0604 22:21:19.197035    4212 request.go:629] Waited for 177.8832ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mqpzs
	I0604 22:21:19.197097    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mqpzs
	I0604 22:21:19.197097    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:19.197097    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:19.197097    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:19.203782    4212 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0604 22:21:19.407628    4212 request.go:629] Waited for 202.9694ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.131.101:8443/api/v1/nodes/ha-609500-m03
	I0604 22:21:19.407628    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500-m03
	I0604 22:21:19.407628    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:19.407628    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:19.407628    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:19.413344    4212 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0604 22:21:19.416343    4212 pod_ready.go:92] pod "kube-proxy-mqpzs" in "kube-system" namespace has status "Ready":"True"
	I0604 22:21:19.416343    4212 pod_ready.go:81] duration metric: took 397.4297ms for pod "kube-proxy-mqpzs" in "kube-system" namespace to be "Ready" ...
	I0604 22:21:19.416642    4212 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-609500" in "kube-system" namespace to be "Ready" ...
	I0604 22:21:19.599114    4212 request.go:629] Waited for 182.4278ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-609500
	I0604 22:21:19.599114    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-609500
	I0604 22:21:19.599361    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:19.599431    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:19.599473    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:19.605255    4212 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0604 22:21:19.796830    4212 request.go:629] Waited for 190.8505ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.131.101:8443/api/v1/nodes/ha-609500
	I0604 22:21:19.797099    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500
	I0604 22:21:19.797099    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:19.797099    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:19.797099    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:19.797589    4212 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0604 22:21:19.802572    4212 pod_ready.go:92] pod "kube-scheduler-ha-609500" in "kube-system" namespace has status "Ready":"True"
	I0604 22:21:19.802572    4212 pod_ready.go:81] duration metric: took 385.9273ms for pod "kube-scheduler-ha-609500" in "kube-system" namespace to be "Ready" ...
	I0604 22:21:19.802572    4212 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-609500-m02" in "kube-system" namespace to be "Ready" ...
	I0604 22:21:20.009884    4212 request.go:629] Waited for 206.4162ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-609500-m02
	I0604 22:21:20.009884    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-609500-m02
	I0604 22:21:20.010024    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:20.010024    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:20.010024    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:20.020874    4212 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0604 22:21:20.202223    4212 request.go:629] Waited for 179.1886ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.131.101:8443/api/v1/nodes/ha-609500-m02
	I0604 22:21:20.202505    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500-m02
	I0604 22:21:20.202580    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:20.202580    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:20.202580    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:20.214692    4212 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0604 22:21:20.215413    4212 pod_ready.go:92] pod "kube-scheduler-ha-609500-m02" in "kube-system" namespace has status "Ready":"True"
	I0604 22:21:20.215459    4212 pod_ready.go:81] duration metric: took 412.316ms for pod "kube-scheduler-ha-609500-m02" in "kube-system" namespace to be "Ready" ...
	I0604 22:21:20.215540    4212 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-609500-m03" in "kube-system" namespace to be "Ready" ...
	I0604 22:21:20.401431    4212 request.go:629] Waited for 185.8895ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-609500-m03
	I0604 22:21:20.401668    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-609500-m03
	I0604 22:21:20.401668    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:20.401668    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:20.401668    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:20.405898    4212 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0604 22:21:20.596395    4212 request.go:629] Waited for 186.4102ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.131.101:8443/api/v1/nodes/ha-609500-m03
	I0604 22:21:20.596395    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500-m03
	I0604 22:21:20.596395    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:20.596395    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:20.596395    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:20.597016    4212 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0604 22:21:20.602923    4212 pod_ready.go:92] pod "kube-scheduler-ha-609500-m03" in "kube-system" namespace has status "Ready":"True"
	I0604 22:21:20.602923    4212 pod_ready.go:81] duration metric: took 387.3796ms for pod "kube-scheduler-ha-609500-m03" in "kube-system" namespace to be "Ready" ...
	I0604 22:21:20.602923    4212 pod_ready.go:38] duration metric: took 8.8044963s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0604 22:21:20.602923    4212 api_server.go:52] waiting for apiserver process to appear ...
	I0604 22:21:20.616235    4212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0604 22:21:20.643845    4212 api_server.go:72] duration metric: took 15.3791645s to wait for apiserver process to appear ...
	I0604 22:21:20.643920    4212 api_server.go:88] waiting for apiserver healthz status ...
	I0604 22:21:20.643977    4212 api_server.go:253] Checking apiserver healthz at https://172.20.131.101:8443/healthz ...
	I0604 22:21:20.655248    4212 api_server.go:279] https://172.20.131.101:8443/healthz returned 200:
	ok
	I0604 22:21:20.655367    4212 round_trippers.go:463] GET https://172.20.131.101:8443/version
	I0604 22:21:20.655367    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:20.655367    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:20.655367    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:20.657003    4212 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0604 22:21:20.657063    4212 api_server.go:141] control plane version: v1.30.1
	I0604 22:21:20.657157    4212 api_server.go:131] duration metric: took 13.2365ms to wait for apiserver health ...
	I0604 22:21:20.657157    4212 system_pods.go:43] waiting for kube-system pods to appear ...
	I0604 22:21:20.795062    4212 request.go:629] Waited for 137.487ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods
	I0604 22:21:20.795062    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods
	I0604 22:21:20.795062    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:20.795062    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:20.795062    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:20.809292    4212 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0604 22:21:20.819695    4212 system_pods.go:59] 24 kube-system pods found
	I0604 22:21:20.819695    4212 system_pods.go:61] "coredns-7db6d8ff4d-r68pn" [4f018ef8-6a1c-4e18-9f46-2341dca31903] Running
	I0604 22:21:20.819695    4212 system_pods.go:61] "coredns-7db6d8ff4d-zlxf9" [71fcfc44-30ee-4092-9ff7-af29b0ad0012] Running
	I0604 22:21:20.819695    4212 system_pods.go:61] "etcd-ha-609500" [94e7aa9b-cfb1-4910-b464-347d8a5506bc] Running
	I0604 22:21:20.819695    4212 system_pods.go:61] "etcd-ha-609500-m02" [2db71342-8a43-42fd-a415-7f05c00163f6] Running
	I0604 22:21:20.820279    4212 system_pods.go:61] "etcd-ha-609500-m03" [2a048691-b672-40ce-a5de-bddb99ba0246] Running
	I0604 22:21:20.820279    4212 system_pods.go:61] "kindnet-7plk9" [59617539-bb65-430a-a2a6-9b29fe07b8e0] Running
	I0604 22:21:20.820279    4212 system_pods.go:61] "kindnet-bpml8" [c8881f19-8b7c-4de7-90e6-0b77affa003b] Running
	I0604 22:21:20.820279    4212 system_pods.go:61] "kindnet-phj2j" [56d23c07-ebe0-4876-9a2b-e170cbdf2ce2] Running
	I0604 22:21:20.820279    4212 system_pods.go:61] "kube-apiserver-ha-609500" [048ab298-bd5e-4e53-bfd5-315b7b0349aa] Running
	I0604 22:21:20.820279    4212 system_pods.go:61] "kube-apiserver-ha-609500-m02" [72263744-42da-4c56-bad3-7099b69eb3e7] Running
	I0604 22:21:20.820279    4212 system_pods.go:61] "kube-apiserver-ha-609500-m03" [c56ed0b7-dce0-4628-886c-7b078c99aa57] Running
	I0604 22:21:20.820279    4212 system_pods.go:61] "kube-controller-manager-ha-609500" [6641ef19-a87e-425d-b698-04ac420f56f0] Running
	I0604 22:21:20.820279    4212 system_pods.go:61] "kube-controller-manager-ha-609500-m02" [8e6b0735-115c-456a-b99b-9c55270b1cb2] Running
	I0604 22:21:20.820279    4212 system_pods.go:61] "kube-controller-manager-ha-609500-m03" [99f40329-5004-4302-b9e3-71b3c33323e4] Running
	I0604 22:21:20.820279    4212 system_pods.go:61] "kube-proxy-4ppxq" [b0b0ad53-65c5-450e-981e-2034d197fc82] Running
	I0604 22:21:20.820279    4212 system_pods.go:61] "kube-proxy-fnjrb" [274d8218-2645-4664-a7fa-3303767b4f87] Running
	I0604 22:21:20.820279    4212 system_pods.go:61] "kube-proxy-mqpzs" [38dd642e-4689-4125-8cfe-48f08039d3d7] Running
	I0604 22:21:20.820279    4212 system_pods.go:61] "kube-scheduler-ha-609500" [64451eb3-387e-41ad-be19-ba5b3c45f5a8] Running
	I0604 22:21:20.820572    4212 system_pods.go:61] "kube-scheduler-ha-609500-m02" [b33a6f6a-2681-4248-b0dc-2a1d72041a48] Running
	I0604 22:21:20.820572    4212 system_pods.go:61] "kube-scheduler-ha-609500-m03" [026ddba3-e162-44e7-8ceb-1cc50ad79708] Running
	I0604 22:21:20.820572    4212 system_pods.go:61] "kube-vip-ha-609500" [85ca2aa5-05d8-4f1b-80c8-7511304cc2bb] Running
	I0604 22:21:20.820572    4212 system_pods.go:61] "kube-vip-ha-609500-m02" [143e42dd-8e55-449a-921a-d67c132096e6] Running
	I0604 22:21:20.820572    4212 system_pods.go:61] "kube-vip-ha-609500-m03" [f5e7a6dc-d055-425a-bd95-1e7da9341c97] Running
	I0604 22:21:20.820572    4212 system_pods.go:61] "storage-provisioner" [c7f1304c-577a-4baf-84d0-51c6006a05f0] Running
	I0604 22:21:20.820705    4212 system_pods.go:74] duration metric: took 163.3643ms to wait for pod list to return data ...
	I0604 22:21:20.820705    4212 default_sa.go:34] waiting for default service account to be created ...
	I0604 22:21:21.007522    4212 request.go:629] Waited for 186.8159ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.131.101:8443/api/v1/namespaces/default/serviceaccounts
	I0604 22:21:21.007756    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/namespaces/default/serviceaccounts
	I0604 22:21:21.007756    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:21.007756    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:21.007864    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:21.014716    4212 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0604 22:21:21.014716    4212 default_sa.go:45] found service account: "default"
	I0604 22:21:21.014716    4212 default_sa.go:55] duration metric: took 194.0096ms for default service account to be created ...
	I0604 22:21:21.014716    4212 system_pods.go:116] waiting for k8s-apps to be running ...
	I0604 22:21:21.206499    4212 request.go:629] Waited for 191.5686ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods
	I0604 22:21:21.206665    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods
	I0604 22:21:21.206665    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:21.206665    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:21.206665    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:21.218328    4212 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0604 22:21:21.230088    4212 system_pods.go:86] 24 kube-system pods found
	I0604 22:21:21.230088    4212 system_pods.go:89] "coredns-7db6d8ff4d-r68pn" [4f018ef8-6a1c-4e18-9f46-2341dca31903] Running
	I0604 22:21:21.230088    4212 system_pods.go:89] "coredns-7db6d8ff4d-zlxf9" [71fcfc44-30ee-4092-9ff7-af29b0ad0012] Running
	I0604 22:21:21.230088    4212 system_pods.go:89] "etcd-ha-609500" [94e7aa9b-cfb1-4910-b464-347d8a5506bc] Running
	I0604 22:21:21.230088    4212 system_pods.go:89] "etcd-ha-609500-m02" [2db71342-8a43-42fd-a415-7f05c00163f6] Running
	I0604 22:21:21.230088    4212 system_pods.go:89] "etcd-ha-609500-m03" [2a048691-b672-40ce-a5de-bddb99ba0246] Running
	I0604 22:21:21.230088    4212 system_pods.go:89] "kindnet-7plk9" [59617539-bb65-430a-a2a6-9b29fe07b8e0] Running
	I0604 22:21:21.230088    4212 system_pods.go:89] "kindnet-bpml8" [c8881f19-8b7c-4de7-90e6-0b77affa003b] Running
	I0604 22:21:21.230088    4212 system_pods.go:89] "kindnet-phj2j" [56d23c07-ebe0-4876-9a2b-e170cbdf2ce2] Running
	I0604 22:21:21.230088    4212 system_pods.go:89] "kube-apiserver-ha-609500" [048ab298-bd5e-4e53-bfd5-315b7b0349aa] Running
	I0604 22:21:21.230088    4212 system_pods.go:89] "kube-apiserver-ha-609500-m02" [72263744-42da-4c56-bad3-7099b69eb3e7] Running
	I0604 22:21:21.230088    4212 system_pods.go:89] "kube-apiserver-ha-609500-m03" [c56ed0b7-dce0-4628-886c-7b078c99aa57] Running
	I0604 22:21:21.230088    4212 system_pods.go:89] "kube-controller-manager-ha-609500" [6641ef19-a87e-425d-b698-04ac420f56f0] Running
	I0604 22:21:21.230088    4212 system_pods.go:89] "kube-controller-manager-ha-609500-m02" [8e6b0735-115c-456a-b99b-9c55270b1cb2] Running
	I0604 22:21:21.230088    4212 system_pods.go:89] "kube-controller-manager-ha-609500-m03" [99f40329-5004-4302-b9e3-71b3c33323e4] Running
	I0604 22:21:21.230088    4212 system_pods.go:89] "kube-proxy-4ppxq" [b0b0ad53-65c5-450e-981e-2034d197fc82] Running
	I0604 22:21:21.230088    4212 system_pods.go:89] "kube-proxy-fnjrb" [274d8218-2645-4664-a7fa-3303767b4f87] Running
	I0604 22:21:21.230088    4212 system_pods.go:89] "kube-proxy-mqpzs" [38dd642e-4689-4125-8cfe-48f08039d3d7] Running
	I0604 22:21:21.230088    4212 system_pods.go:89] "kube-scheduler-ha-609500" [64451eb3-387e-41ad-be19-ba5b3c45f5a8] Running
	I0604 22:21:21.230088    4212 system_pods.go:89] "kube-scheduler-ha-609500-m02" [b33a6f6a-2681-4248-b0dc-2a1d72041a48] Running
	I0604 22:21:21.230088    4212 system_pods.go:89] "kube-scheduler-ha-609500-m03" [026ddba3-e162-44e7-8ceb-1cc50ad79708] Running
	I0604 22:21:21.230088    4212 system_pods.go:89] "kube-vip-ha-609500" [85ca2aa5-05d8-4f1b-80c8-7511304cc2bb] Running
	I0604 22:21:21.234361    4212 system_pods.go:89] "kube-vip-ha-609500-m02" [143e42dd-8e55-449a-921a-d67c132096e6] Running
	I0604 22:21:21.234361    4212 system_pods.go:89] "kube-vip-ha-609500-m03" [f5e7a6dc-d055-425a-bd95-1e7da9341c97] Running
	I0604 22:21:21.234361    4212 system_pods.go:89] "storage-provisioner" [c7f1304c-577a-4baf-84d0-51c6006a05f0] Running
	I0604 22:21:21.234361    4212 system_pods.go:126] duration metric: took 219.6437ms to wait for k8s-apps to be running ...
	I0604 22:21:21.234488    4212 system_svc.go:44] waiting for kubelet service to be running ....
	I0604 22:21:21.249488    4212 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0604 22:21:21.283205    4212 system_svc.go:56] duration metric: took 48.7166ms WaitForService to wait for kubelet
	I0604 22:21:21.283205    4212 kubeadm.go:576] duration metric: took 16.018519s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0604 22:21:21.283205    4212 node_conditions.go:102] verifying NodePressure condition ...
	I0604 22:21:21.398025    4212 request.go:629] Waited for 114.6923ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.131.101:8443/api/v1/nodes
	I0604 22:21:21.398110    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes
	I0604 22:21:21.398110    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:21.398110    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:21.398110    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:21.398843    4212 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0604 22:21:21.404671    4212 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0604 22:21:21.404797    4212 node_conditions.go:123] node cpu capacity is 2
	I0604 22:21:21.404797    4212 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0604 22:21:21.404797    4212 node_conditions.go:123] node cpu capacity is 2
	I0604 22:21:21.404797    4212 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0604 22:21:21.404797    4212 node_conditions.go:123] node cpu capacity is 2
	I0604 22:21:21.404797    4212 node_conditions.go:105] duration metric: took 121.5919ms to run NodePressure ...
	I0604 22:21:21.404797    4212 start.go:240] waiting for startup goroutines ...
	I0604 22:21:21.404870    4212 start.go:254] writing updated cluster config ...
	I0604 22:21:21.417929    4212 ssh_runner.go:195] Run: rm -f paused
	I0604 22:21:21.581491    4212 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0604 22:21:21.588650    4212 out.go:177] * Done! kubectl is now configured to use "ha-609500" cluster and "default" namespace by default
	
	
	==> Docker <==
	Jun 04 22:12:59 ha-609500 dockerd[1332]: time="2024-06-04T22:12:59.273946534Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 04 22:12:59 ha-609500 dockerd[1332]: time="2024-06-04T22:12:59.298476257Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 04 22:12:59 ha-609500 dockerd[1332]: time="2024-06-04T22:12:59.298551958Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 04 22:12:59 ha-609500 dockerd[1332]: time="2024-06-04T22:12:59.298567758Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 04 22:12:59 ha-609500 dockerd[1332]: time="2024-06-04T22:12:59.298722859Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 04 22:12:59 ha-609500 cri-dockerd[1231]: time="2024-06-04T22:12:59Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/07fc3250e17fc2be52599b3a5ac65f4c57112114874ffd14b7a25b577522b9c1/resolv.conf as [nameserver 172.20.128.1]"
	Jun 04 22:12:59 ha-609500 cri-dockerd[1231]: time="2024-06-04T22:12:59Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/55963b43f59f0610447f6d95d5bf45f6738008e8f6a85333398d4c8bd26a6e40/resolv.conf as [nameserver 172.20.128.1]"
	Jun 04 22:12:59 ha-609500 dockerd[1332]: time="2024-06-04T22:12:59.964337806Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 04 22:12:59 ha-609500 dockerd[1332]: time="2024-06-04T22:12:59.964583607Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 04 22:12:59 ha-609500 dockerd[1332]: time="2024-06-04T22:12:59.965539709Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 04 22:12:59 ha-609500 dockerd[1332]: time="2024-06-04T22:12:59.965999010Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 04 22:12:59 ha-609500 dockerd[1332]: time="2024-06-04T22:12:59.995534182Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 04 22:12:59 ha-609500 dockerd[1332]: time="2024-06-04T22:12:59.995726482Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 04 22:12:59 ha-609500 dockerd[1332]: time="2024-06-04T22:12:59.995768782Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 04 22:12:59 ha-609500 dockerd[1332]: time="2024-06-04T22:12:59.995880082Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 04 22:22:03 ha-609500 dockerd[1332]: time="2024-06-04T22:22:03.702018077Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 04 22:22:03 ha-609500 dockerd[1332]: time="2024-06-04T22:22:03.702113078Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 04 22:22:03 ha-609500 dockerd[1332]: time="2024-06-04T22:22:03.702133278Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 04 22:22:03 ha-609500 dockerd[1332]: time="2024-06-04T22:22:03.702297379Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 04 22:22:03 ha-609500 cri-dockerd[1231]: time="2024-06-04T22:22:03Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/cd105bf931810d5094fe400f19d7941e4062c0e8296b59dc0adb294e6d176eca/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jun 04 22:22:05 ha-609500 cri-dockerd[1231]: time="2024-06-04T22:22:05Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Jun 04 22:22:05 ha-609500 dockerd[1332]: time="2024-06-04T22:22:05.620169490Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 04 22:22:05 ha-609500 dockerd[1332]: time="2024-06-04T22:22:05.620315691Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 04 22:22:05 ha-609500 dockerd[1332]: time="2024-06-04T22:22:05.620370392Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 04 22:22:05 ha-609500 dockerd[1332]: time="2024-06-04T22:22:05.620498092Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	43eb245091a16       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   About a minute ago   Running             busybox                   0                   cd105bf931810       busybox-fc5497c4f-m2dsk
	331200672b900       cbb01a7bd410d                                                                                         10 minutes ago       Running             coredns                   0                   07fc3250e17fc       coredns-7db6d8ff4d-zlxf9
	354d29cc4ee64       cbb01a7bd410d                                                                                         10 minutes ago       Running             coredns                   0                   55963b43f59f0       coredns-7db6d8ff4d-r68pn
	b2e01578bf279       6e38f40d628db                                                                                         10 minutes ago       Running             storage-provisioner       0                   f868c2e89359e       storage-provisioner
	eab704b102c1e       kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8              10 minutes ago       Running             kindnet-cni               0                   eb5fec61a1850       kindnet-phj2j
	27ad26efaa029       747097150317f                                                                                         10 minutes ago       Running             kube-proxy                0                   04f2353b96c8e       kube-proxy-4ppxq
	fc670e59a57fc       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     10 minutes ago       Running             kube-vip                  0                   8858e6f093ca0       kube-vip-ha-609500
	150d0f1df1f9b       3861cfcd7c04c                                                                                         10 minutes ago       Running             etcd                      0                   f661270f19b99       etcd-ha-609500
	e9cca2562827d       a52dc94f0a912                                                                                         10 minutes ago       Running             kube-scheduler            0                   a2207aa685938       kube-scheduler-ha-609500
	ca3f58b82ea71       25a1387cdab82                                                                                         10 minutes ago       Running             kube-controller-manager   0                   8e72949429c8a       kube-controller-manager-ha-609500
	469104c1a293e       91be940803172                                                                                         10 minutes ago       Running             kube-apiserver            0                   f15ef59ba79a5       kube-apiserver-ha-609500
	
	
	==> coredns [331200672b90] <==
	[INFO] 10.244.0.4:36499 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.05361587s
	[INFO] 10.244.1.2:37918 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000167002s
	[INFO] 10.244.1.2:38784 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.0000781s
	[INFO] 10.244.1.2:41814 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.039238244s
	[INFO] 10.244.2.2:39799 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.128748783s
	[INFO] 10.244.2.2:33241 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000271902s
	[INFO] 10.244.2.2:43971 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000184102s
	[INFO] 10.244.2.2:41575 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000131401s
	[INFO] 10.244.0.4:43592 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000164401s
	[INFO] 10.244.0.4:52666 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000249401s
	[INFO] 10.244.0.4:59874 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000075101s
	[INFO] 10.244.1.2:58128 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000218902s
	[INFO] 10.244.1.2:52271 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0000733s
	[INFO] 10.244.1.2:39420 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000298302s
	[INFO] 10.244.1.2:44136 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000132s
	[INFO] 10.244.1.2:57848 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0000594s
	[INFO] 10.244.2.2:57496 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001109s
	[INFO] 10.244.0.4:35893 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000161801s
	[INFO] 10.244.0.4:33714 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000137701s
	[INFO] 10.244.2.2:50485 - 5 "PTR IN 1.128.20.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000126001s
	[INFO] 10.244.0.4:43903 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000335002s
	[INFO] 10.244.0.4:59114 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000119901s
	[INFO] 10.244.1.2:47087 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000347203s
	[INFO] 10.244.1.2:41196 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000082001s
	[INFO] 10.244.1.2:38000 - 5 "PTR IN 1.128.20.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0000596s
	
	
	==> coredns [354d29cc4ee6] <==
	[INFO] 10.244.2.2:37187 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.012340285s
	[INFO] 10.244.2.2:34193 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001227s
	[INFO] 10.244.0.4:38135 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.014181198s
	[INFO] 10.244.0.4:36529 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000136301s
	[INFO] 10.244.0.4:46892 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000100601s
	[INFO] 10.244.0.4:44799 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000181301s
	[INFO] 10.244.0.4:50435 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001225s
	[INFO] 10.244.1.2:49492 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000072701s
	[INFO] 10.244.1.2:38408 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000483004s
	[INFO] 10.244.1.2:35903 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000623s
	[INFO] 10.244.2.2:54488 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0000793s
	[INFO] 10.244.2.2:33208 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000096301s
	[INFO] 10.244.2.2:47293 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000616s
	[INFO] 10.244.0.4:44019 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000068s
	[INFO] 10.244.0.4:47749 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000636304s
	[INFO] 10.244.1.2:45546 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000119s
	[INFO] 10.244.1.2:60098 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000079s
	[INFO] 10.244.1.2:59963 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000058901s
	[INFO] 10.244.1.2:59268 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000059s
	[INFO] 10.244.2.2:49237 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000277802s
	[INFO] 10.244.2.2:54226 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000119301s
	[INFO] 10.244.2.2:38788 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000092001s
	[INFO] 10.244.0.4:36682 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0001257s
	[INFO] 10.244.0.4:50471 - 5 "PTR IN 1.128.20.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000159701s
	[INFO] 10.244.1.2:40217 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000139301s
	
	
	==> describe nodes <==
	Name:               ha-609500
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-609500
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=901ac483c3e1097c63cda7493d918b612a8127f5
	                    minikube.k8s.io/name=ha-609500
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_04T22_12_32_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 04 Jun 2024 22:12:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-609500
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 04 Jun 2024 22:23:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 04 Jun 2024 22:22:32 +0000   Tue, 04 Jun 2024 22:12:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 04 Jun 2024 22:22:32 +0000   Tue, 04 Jun 2024 22:12:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 04 Jun 2024 22:22:32 +0000   Tue, 04 Jun 2024 22:12:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 04 Jun 2024 22:22:32 +0000   Tue, 04 Jun 2024 22:12:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.20.131.101
	  Hostname:    ha-609500
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 5174b038796a4663b8fdcff3502fbd2e
	  System UUID:                4fe51a0c-e109-9f4f-897a-12b5e0a75135
	  Boot ID:                    44531ec2-8568-49af-b4f3-f119c23323a6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.1.3
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-m2dsk              0 (0%)        0 (0%)      0 (0%)           0 (0%)         70s
	  kube-system                 coredns-7db6d8ff4d-r68pn             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10m
	  kube-system                 coredns-7db6d8ff4d-zlxf9             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10m
	  kube-system                 etcd-ha-609500                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-phj2j                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-apiserver-ha-609500             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-ha-609500    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-4ppxq                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-ha-609500             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-vip-ha-609500                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 10m                kube-proxy       
	  Normal  Starting                 10m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     10m (x7 over 10m)  kubelet          Node ha-609500 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node ha-609500 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node ha-609500 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 10m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m                kubelet          Node ha-609500 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m                kubelet          Node ha-609500 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m                kubelet          Node ha-609500 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           10m                node-controller  Node ha-609500 event: Registered Node ha-609500 in Controller
	  Normal  NodeReady                10m                kubelet          Node ha-609500 status is now: NodeReady
	  Normal  RegisteredNode           5m58s              node-controller  Node ha-609500 event: Registered Node ha-609500 in Controller
	  Normal  RegisteredNode           112s               node-controller  Node ha-609500 event: Registered Node ha-609500 in Controller
	
	
	Name:               ha-609500-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-609500-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=901ac483c3e1097c63cda7493d918b612a8127f5
	                    minikube.k8s.io/name=ha-609500
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_04T22_16_58_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 04 Jun 2024 22:16:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-609500-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 04 Jun 2024 22:23:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 04 Jun 2024 22:22:28 +0000   Tue, 04 Jun 2024 22:16:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 04 Jun 2024 22:22:28 +0000   Tue, 04 Jun 2024 22:16:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 04 Jun 2024 22:22:28 +0000   Tue, 04 Jun 2024 22:16:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 04 Jun 2024 22:22:28 +0000   Tue, 04 Jun 2024 22:17:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.20.128.86
	  Hostname:    ha-609500-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 7ae6337051f143178855fd9d2477b35d
	  System UUID:                16a9419c-11c4-e04e-8422-6e5fd7629acf
	  Boot ID:                    e0f80e3c-2f9d-4d80-a96e-da68cc478a81
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.1.3
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-qm589                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         70s
	  kube-system                 etcd-ha-609500-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m17s
	  kube-system                 kindnet-7plk9                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m21s
	  kube-system                 kube-apiserver-ha-609500-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m17s
	  kube-system                 kube-controller-manager-ha-609500-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m17s
	  kube-system                 kube-proxy-fnjrb                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m21s
	  kube-system                 kube-scheduler-ha-609500-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m17s
	  kube-system                 kube-vip-ha-609500-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m13s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m13s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  6m21s (x8 over 6m21s)  kubelet          Node ha-609500-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m21s (x8 over 6m21s)  kubelet          Node ha-609500-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m21s (x7 over 6m21s)  kubelet          Node ha-609500-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m18s                  node-controller  Node ha-609500-m02 event: Registered Node ha-609500-m02 in Controller
	  Normal  RegisteredNode           5m58s                  node-controller  Node ha-609500-m02 event: Registered Node ha-609500-m02 in Controller
	  Normal  RegisteredNode           112s                   node-controller  Node ha-609500-m02 event: Registered Node ha-609500-m02 in Controller
	
	
	Name:               ha-609500-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-609500-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=901ac483c3e1097c63cda7493d918b612a8127f5
	                    minikube.k8s.io/name=ha-609500
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_04T22_21_04_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 04 Jun 2024 22:20:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-609500-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 04 Jun 2024 22:23:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 04 Jun 2024 22:22:27 +0000   Tue, 04 Jun 2024 22:20:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 04 Jun 2024 22:22:27 +0000   Tue, 04 Jun 2024 22:20:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 04 Jun 2024 22:22:27 +0000   Tue, 04 Jun 2024 22:20:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 04 Jun 2024 22:22:27 +0000   Tue, 04 Jun 2024 22:21:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.20.138.190
	  Hostname:    ha-609500-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 abe5455664c245a586d7fcd510ba03a9
	  System UUID:                a8a48a27-0986-8744-b933-d146cf528029
	  Boot ID:                    2bec906f-0529-42a4-a365-7362054d68ad
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.1.3
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-gbl9h                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         70s
	  kube-system                 etcd-ha-609500-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         2m15s
	  kube-system                 kindnet-bpml8                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      2m17s
	  kube-system                 kube-apiserver-ha-609500-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m12s
	  kube-system                 kube-controller-manager-ha-609500-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m12s
	  kube-system                 kube-proxy-mqpzs                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m17s
	  kube-system                 kube-scheduler-ha-609500-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m12s
	  kube-system                 kube-vip-ha-609500-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m10s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m17s (x8 over 2m17s)  kubelet          Node ha-609500-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m17s (x8 over 2m17s)  kubelet          Node ha-609500-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m17s (x7 over 2m17s)  kubelet          Node ha-609500-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m17s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m13s                  node-controller  Node ha-609500-m03 event: Registered Node ha-609500-m03 in Controller
	  Normal  RegisteredNode           2m13s                  node-controller  Node ha-609500-m03 event: Registered Node ha-609500-m03 in Controller
	  Normal  RegisteredNode           112s                   node-controller  Node ha-609500-m03 event: Registered Node ha-609500-m03 in Controller
	
	
	==> dmesg <==
	[  +7.378679] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jun 4 22:11] systemd-fstab-generator[641]: Ignoring "noauto" option for root device
	[  +0.201942] systemd-fstab-generator[653]: Ignoring "noauto" option for root device
	[ +32.976318] systemd-fstab-generator[948]: Ignoring "noauto" option for root device
	[  +0.103649] kauditd_printk_skb: 59 callbacks suppressed
	[  +0.603333] systemd-fstab-generator[987]: Ignoring "noauto" option for root device
	[  +0.213394] systemd-fstab-generator[999]: Ignoring "noauto" option for root device
	[  +0.252332] systemd-fstab-generator[1013]: Ignoring "noauto" option for root device
	[  +2.866463] systemd-fstab-generator[1184]: Ignoring "noauto" option for root device
	[  +0.198410] systemd-fstab-generator[1196]: Ignoring "noauto" option for root device
	[  +0.199632] systemd-fstab-generator[1208]: Ignoring "noauto" option for root device
	[  +0.292667] systemd-fstab-generator[1223]: Ignoring "noauto" option for root device
	[Jun 4 22:12] systemd-fstab-generator[1318]: Ignoring "noauto" option for root device
	[  +0.110759] kauditd_printk_skb: 205 callbacks suppressed
	[  +3.497531] systemd-fstab-generator[1522]: Ignoring "noauto" option for root device
	[  +7.392152] systemd-fstab-generator[1730]: Ignoring "noauto" option for root device
	[  +0.113219] kauditd_printk_skb: 73 callbacks suppressed
	[  +6.355782] kauditd_printk_skb: 67 callbacks suppressed
	[  +4.214130] systemd-fstab-generator[2213]: Ignoring "noauto" option for root device
	[ +14.770971] kauditd_printk_skb: 17 callbacks suppressed
	[  +7.270010] kauditd_printk_skb: 29 callbacks suppressed
	[Jun 4 22:17] kauditd_printk_skb: 26 callbacks suppressed
	[Jun 4 22:22] hrtimer: interrupt took 6651046 ns
	
	
	==> etcd [150d0f1df1f9] <==
	{"level":"warn","ts":"2024-06-04T22:21:01.936721Z","caller":"etcdhttp/peer.go:150","msg":"failed to promote a member","member-id":"8cfef8e34c568672","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"info","ts":"2024-06-04T22:21:03.442489Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2febc9c58da119ed switched to configuration voters=(3453075389631502829 10159831464516421234 12975333203473944227)"}
	{"level":"info","ts":"2024-06-04T22:21:03.442605Z","caller":"membership/cluster.go:535","msg":"promote member","cluster-id":"920e01f874df351e","local-member-id":"2febc9c58da119ed"}
	{"level":"info","ts":"2024-06-04T22:21:03.442632Z","caller":"etcdserver/server.go:1946","msg":"applied a configuration change through raft","local-member-id":"2febc9c58da119ed","raft-conf-change":"ConfChangeAddNode","raft-conf-change-node-id":"8cfef8e34c568672"}
	{"level":"warn","ts":"2024-06-04T22:21:04.954409Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"190.401461ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/ha-609500-m03\" ","response":"range_response_count:1 size:3669"}
	{"level":"info","ts":"2024-06-04T22:21:04.954556Z","caller":"traceutil/trace.go:171","msg":"trace[452653956] range","detail":"{range_begin:/registry/minions/ha-609500-m03; range_end:; response_count:1; response_revision:1651; }","duration":"190.584862ms","start":"2024-06-04T22:21:04.763957Z","end":"2024-06-04T22:21:04.954542Z","steps":["trace[452653956] 'range keys from in-memory index tree'  (duration: 188.464955ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-04T22:21:05.067415Z","caller":"traceutil/trace.go:171","msg":"trace[1950237184] transaction","detail":"{read_only:false; response_revision:1652; number_of_response:1; }","duration":"101.8978ms","start":"2024-06-04T22:21:04.9655Z","end":"2024-06-04T22:21:05.067398Z","steps":["trace[1950237184] 'process raft request'  (duration: 101.7545ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-04T22:21:08.424269Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"115.941041ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/ha-609500-m03\" ","response":"range_response_count:1 size:4443"}
	{"level":"info","ts":"2024-06-04T22:21:08.424407Z","caller":"traceutil/trace.go:171","msg":"trace[1622558986] range","detail":"{range_begin:/registry/minions/ha-609500-m03; range_end:; response_count:1; response_revision:1665; }","duration":"116.251542ms","start":"2024-06-04T22:21:08.308141Z","end":"2024-06-04T22:21:08.424393Z","steps":["trace[1622558986] 'agreement among raft nodes before linearized reading'  (duration: 115.907341ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-04T22:21:08.574389Z","caller":"traceutil/trace.go:171","msg":"trace[725127580] transaction","detail":"{read_only:false; response_revision:1666; number_of_response:1; }","duration":"116.118242ms","start":"2024-06-04T22:21:08.458254Z","end":"2024-06-04T22:21:08.574372Z","steps":["trace[725127580] 'process raft request'  (duration: 115.929741ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-04T22:22:02.600239Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.373801ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/busybox-fc5497c4f-gbl9h\" ","response":"range_response_count:1 size:2184"}
	{"level":"info","ts":"2024-06-04T22:22:02.600329Z","caller":"traceutil/trace.go:171","msg":"trace[1023604352] range","detail":"{range_begin:/registry/pods/default/busybox-fc5497c4f-gbl9h; range_end:; response_count:1; response_revision:1835; }","duration":"101.470902ms","start":"2024-06-04T22:22:02.498845Z","end":"2024-06-04T22:22:02.600316Z","steps":["trace[1023604352] 'agreement among raft nodes before linearized reading'  (duration: 101.343601ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-04T22:22:03.367755Z","caller":"traceutil/trace.go:171","msg":"trace[2068861496] transaction","detail":"{read_only:false; response_revision:1911; number_of_response:1; }","duration":"103.98942ms","start":"2024-06-04T22:22:03.263587Z","end":"2024-06-04T22:22:03.367577Z","steps":["trace[2068861496] 'process raft request'  (duration: 103.819318ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-04T22:22:03.368992Z","caller":"traceutil/trace.go:171","msg":"trace[621648563] linearizableReadLoop","detail":"{readStateIndex:2182; appliedIndex:2186; }","duration":"108.214653ms","start":"2024-06-04T22:22:03.260767Z","end":"2024-06-04T22:22:03.368982Z","steps":["trace[621648563] 'read index received'  (duration: 108.209553ms)","trace[621648563] 'applied index is now lower than readState.Index'  (duration: 4.4µs)"],"step_count":2}
	{"level":"warn","ts":"2024-06-04T22:22:03.369265Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"108.646556ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/replicasets/default/busybox-fc5497c4f\" ","response":"range_response_count:1 size:2013"}
	{"level":"info","ts":"2024-06-04T22:22:03.369366Z","caller":"traceutil/trace.go:171","msg":"trace[1350490518] range","detail":"{range_begin:/registry/replicasets/default/busybox-fc5497c4f; range_end:; response_count:1; response_revision:1911; }","duration":"108.782158ms","start":"2024-06-04T22:22:03.260574Z","end":"2024-06-04T22:22:03.369356Z","steps":["trace[1350490518] 'agreement among raft nodes before linearized reading'  (duration: 108.508956ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-04T22:22:24.219158Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1086}
	{"level":"info","ts":"2024-06-04T22:22:24.348977Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1086,"took":"129.046876ms","hash":584731514,"current-db-size-bytes":3612672,"current-db-size":"3.6 MB","current-db-size-in-use-bytes":2236416,"current-db-size-in-use":"2.2 MB"}
	{"level":"info","ts":"2024-06-04T22:22:24.349185Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":584731514,"revision":1086,"compact-revision":-1}
	{"level":"info","ts":"2024-06-04T22:23:10.375901Z","caller":"traceutil/trace.go:171","msg":"trace[25909619] linearizableReadLoop","detail":"{readStateIndex:2427; appliedIndex:2427; }","duration":"126.557823ms","start":"2024-06-04T22:23:10.249326Z","end":"2024-06-04T22:23:10.375884Z","steps":["trace[25909619] 'read index received'  (duration: 126.542123ms)","trace[25909619] 'applied index is now lower than readState.Index'  (duration: 14.5µs)"],"step_count":2}
	{"level":"warn","ts":"2024-06-04T22:23:10.376107Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"126.760224ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-06-04T22:23:10.376165Z","caller":"traceutil/trace.go:171","msg":"trace[1444456591] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:2098; }","duration":"126.861126ms","start":"2024-06-04T22:23:10.249294Z","end":"2024-06-04T22:23:10.376155Z","steps":["trace[1444456591] 'agreement among raft nodes before linearized reading'  (duration: 126.757825ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-04T22:23:10.376717Z","caller":"traceutil/trace.go:171","msg":"trace[1705235004] transaction","detail":"{read_only:false; response_revision:2099; number_of_response:1; }","duration":"135.609682ms","start":"2024-06-04T22:23:10.241096Z","end":"2024-06-04T22:23:10.376706Z","steps":["trace[1705235004] 'process raft request'  (duration: 135.470381ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-04T22:23:10.380543Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"129.464942ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/172.20.131.101\" ","response":"range_response_count:1 size:135"}
	{"level":"info","ts":"2024-06-04T22:23:10.380609Z","caller":"traceutil/trace.go:171","msg":"trace[1169356073] range","detail":"{range_begin:/registry/masterleases/172.20.131.101; range_end:; response_count:1; response_revision:2099; }","duration":"129.600344ms","start":"2024-06-04T22:23:10.250998Z","end":"2024-06-04T22:23:10.380599Z","steps":["trace[1169356073] 'agreement among raft nodes before linearized reading'  (duration: 129.486143ms)"],"step_count":1}
	
	
	==> kernel <==
	 22:23:12 up 13 min,  0 users,  load average: 0.63, 0.53, 0.34
	Linux ha-609500 5.10.207 #1 SMP Tue Jun 4 20:09:42 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [eab704b102c1] <==
	I0604 22:22:24.408484       1 main.go:250] Node ha-609500-m03 has CIDR [10.244.2.0/24] 
	I0604 22:22:34.417253       1 main.go:223] Handling node with IPs: map[172.20.131.101:{}]
	I0604 22:22:34.417376       1 main.go:227] handling current node
	I0604 22:22:34.417391       1 main.go:223] Handling node with IPs: map[172.20.128.86:{}]
	I0604 22:22:34.417399       1 main.go:250] Node ha-609500-m02 has CIDR [10.244.1.0/24] 
	I0604 22:22:34.417545       1 main.go:223] Handling node with IPs: map[172.20.138.190:{}]
	I0604 22:22:34.417559       1 main.go:250] Node ha-609500-m03 has CIDR [10.244.2.0/24] 
	I0604 22:22:44.435441       1 main.go:223] Handling node with IPs: map[172.20.131.101:{}]
	I0604 22:22:44.435472       1 main.go:227] handling current node
	I0604 22:22:44.435497       1 main.go:223] Handling node with IPs: map[172.20.128.86:{}]
	I0604 22:22:44.435502       1 main.go:250] Node ha-609500-m02 has CIDR [10.244.1.0/24] 
	I0604 22:22:44.435766       1 main.go:223] Handling node with IPs: map[172.20.138.190:{}]
	I0604 22:22:44.435967       1 main.go:250] Node ha-609500-m03 has CIDR [10.244.2.0/24] 
	I0604 22:22:54.453069       1 main.go:223] Handling node with IPs: map[172.20.131.101:{}]
	I0604 22:22:54.453170       1 main.go:227] handling current node
	I0604 22:22:54.453187       1 main.go:223] Handling node with IPs: map[172.20.128.86:{}]
	I0604 22:22:54.453195       1 main.go:250] Node ha-609500-m02 has CIDR [10.244.1.0/24] 
	I0604 22:22:54.453871       1 main.go:223] Handling node with IPs: map[172.20.138.190:{}]
	I0604 22:22:54.453990       1 main.go:250] Node ha-609500-m03 has CIDR [10.244.2.0/24] 
	I0604 22:23:04.464425       1 main.go:223] Handling node with IPs: map[172.20.131.101:{}]
	I0604 22:23:04.464474       1 main.go:227] handling current node
	I0604 22:23:04.464489       1 main.go:223] Handling node with IPs: map[172.20.128.86:{}]
	I0604 22:23:04.464496       1 main.go:250] Node ha-609500-m02 has CIDR [10.244.1.0/24] 
	I0604 22:23:04.465138       1 main.go:223] Handling node with IPs: map[172.20.138.190:{}]
	I0604 22:23:04.465172       1 main.go:250] Node ha-609500-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [469104c1a293] <==
	E0604 22:16:52.607547       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	E0604 22:16:52.607787       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0604 22:16:52.607836       1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 8.2µs, panicked: false, err: context canceled, panic-reason: <nil>
	E0604 22:16:52.609375       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0604 22:16:52.609817       1 timeout.go:142] post-timeout activity - time-elapsed: 2.340708ms, PATCH "/api/v1/namespaces/default/events/ha-609500-m02.17d5ecfebb82d9e7" result: <nil>
	E0604 22:20:56.629608       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	E0604 22:20:56.629961       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0604 22:20:56.630106       1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 6.1µs, panicked: false, err: context canceled, panic-reason: <nil>
	E0604 22:20:56.631541       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0604 22:20:56.631760       1 timeout.go:142] post-timeout activity - time-elapsed: 2.257906ms, PATCH "/api/v1/namespaces/default/events/ha-609500-m03.17d5ed378a03c55f" result: <nil>
	E0604 22:22:09.229955       1 conn.go:339] Error on socket receive: read tcp 172.20.143.254:8443->172.20.128.1:63672: use of closed network connection
	E0604 22:22:09.858879       1 conn.go:339] Error on socket receive: read tcp 172.20.143.254:8443->172.20.128.1:63675: use of closed network connection
	E0604 22:22:11.498090       1 conn.go:339] Error on socket receive: read tcp 172.20.143.254:8443->172.20.128.1:63677: use of closed network connection
	E0604 22:22:12.186172       1 conn.go:339] Error on socket receive: read tcp 172.20.143.254:8443->172.20.128.1:63679: use of closed network connection
	E0604 22:22:12.757480       1 conn.go:339] Error on socket receive: read tcp 172.20.143.254:8443->172.20.128.1:63681: use of closed network connection
	E0604 22:22:13.343851       1 conn.go:339] Error on socket receive: read tcp 172.20.143.254:8443->172.20.128.1:63683: use of closed network connection
	E0604 22:22:13.899248       1 conn.go:339] Error on socket receive: read tcp 172.20.143.254:8443->172.20.128.1:63685: use of closed network connection
	E0604 22:22:14.477844       1 conn.go:339] Error on socket receive: read tcp 172.20.143.254:8443->172.20.128.1:63687: use of closed network connection
	E0604 22:22:15.044794       1 conn.go:339] Error on socket receive: read tcp 172.20.143.254:8443->172.20.128.1:63689: use of closed network connection
	E0604 22:22:16.052087       1 conn.go:339] Error on socket receive: read tcp 172.20.143.254:8443->172.20.128.1:63693: use of closed network connection
	E0604 22:22:26.609233       1 conn.go:339] Error on socket receive: read tcp 172.20.143.254:8443->172.20.128.1:63695: use of closed network connection
	E0604 22:22:27.190517       1 conn.go:339] Error on socket receive: read tcp 172.20.143.254:8443->172.20.128.1:63698: use of closed network connection
	E0604 22:22:37.736731       1 conn.go:339] Error on socket receive: read tcp 172.20.143.254:8443->172.20.128.1:63700: use of closed network connection
	E0604 22:22:38.272474       1 conn.go:339] Error on socket receive: read tcp 172.20.143.254:8443->172.20.128.1:63702: use of closed network connection
	E0604 22:22:48.830886       1 conn.go:339] Error on socket receive: read tcp 172.20.143.254:8443->172.20.128.1:63704: use of closed network connection
	
	
	==> kube-controller-manager [ca3f58b82ea7] <==
	I0604 22:13:00.685440       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="398.902µs"
	I0604 22:13:00.752202       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="22.951627ms"
	I0604 22:13:00.752681       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="94.401µs"
	I0604 22:13:00.790737       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="29.059361ms"
	I0604 22:13:00.792772       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="197.201µs"
	I0604 22:16:51.794003       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-609500-m02\" does not exist"
	I0604 22:16:51.873894       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-609500-m02" podCIDRs=["10.244.1.0/24"]
	I0604 22:16:54.459192       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-609500-m02"
	I0604 22:20:55.787683       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-609500-m03\" does not exist"
	I0604 22:20:55.814501       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-609500-m03" podCIDRs=["10.244.2.0/24"]
	I0604 22:20:59.547768       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-609500-m03"
	I0604 22:22:02.524560       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="191.113809ms"
	I0604 22:22:02.593745       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="69.038946ms"
	I0604 22:22:02.899859       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="305.981817ms"
	I0604 22:22:03.112227       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="209.934456ms"
	I0604 22:22:03.256053       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="143.741833ms"
	I0604 22:22:03.475243       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="219.132527ms"
	I0604 22:22:03.476334       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="1.024908ms"
	I0604 22:22:03.634968       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="158.496049ms"
	I0604 22:22:03.635552       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="395.703µs"
	I0604 22:22:05.797008       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="79.4µs"
	I0604 22:22:06.087307       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="38.843569ms"
	I0604 22:22:06.124510       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="37.058055ms"
	I0604 22:22:06.188052       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="61.850827ms"
	I0604 22:22:06.188502       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="82.4µs"
	
	
	==> kube-proxy [27ad26efaa02] <==
	I0604 22:12:45.918221       1 server_linux.go:69] "Using iptables proxy"
	I0604 22:12:45.940330       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.20.131.101"]
	I0604 22:12:46.015378       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0604 22:12:46.015511       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0604 22:12:46.015540       1 server_linux.go:165] "Using iptables Proxier"
	I0604 22:12:46.020469       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0604 22:12:46.021345       1 server.go:872] "Version info" version="v1.30.1"
	I0604 22:12:46.021557       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0604 22:12:46.024931       1 config.go:192] "Starting service config controller"
	I0604 22:12:46.026864       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0604 22:12:46.026345       1 config.go:101] "Starting endpoint slice config controller"
	I0604 22:12:46.027389       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0604 22:12:46.025989       1 config.go:319] "Starting node config controller"
	I0604 22:12:46.027755       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0604 22:12:46.127564       1 shared_informer.go:320] Caches are synced for service config
	I0604 22:12:46.128069       1 shared_informer.go:320] Caches are synced for node config
	I0604 22:12:46.128204       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [e9cca2562827] <==
	W0604 22:12:28.663002       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0604 22:12:28.663041       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0604 22:12:28.711141       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0604 22:12:28.711617       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0604 22:12:28.797686       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0604 22:12:28.797851       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0604 22:12:28.870024       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0604 22:12:28.870063       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0604 22:12:28.930507       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0604 22:12:28.930710       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0604 22:12:28.937270       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0604 22:12:28.937325       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0604 22:12:28.964585       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0604 22:12:28.967043       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0604 22:12:29.012723       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0604 22:12:29.013016       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0604 22:12:31.125503       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0604 22:20:56.135803       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-ttthj\": pod kindnet-ttthj is already assigned to node \"ha-609500-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-ttthj" node="ha-609500-m03"
	E0604 22:20:56.136001       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 45a0a448-dfbd-46dd-8c14-eae75989a0a2(kube-system/kindnet-ttthj) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-ttthj"
	E0604 22:20:56.136619       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-ttthj\": pod kindnet-ttthj is already assigned to node \"ha-609500-m03\"" pod="kube-system/kindnet-ttthj"
	I0604 22:20:56.136793       1 schedule_one.go:1064] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-ttthj" node="ha-609500-m03"
	E0604 22:22:02.468216       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-qm589\": pod busybox-fc5497c4f-qm589 is already assigned to node \"ha-609500-m02\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-qm589" node="ha-609500-m02"
	E0604 22:22:02.469195       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 7da8d303-4706-4bb8-8a78-ac1973051987(default/busybox-fc5497c4f-qm589) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-qm589"
	E0604 22:22:02.469345       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-qm589\": pod busybox-fc5497c4f-qm589 is already assigned to node \"ha-609500-m02\"" pod="default/busybox-fc5497c4f-qm589"
	I0604 22:22:02.469438       1 schedule_one.go:1064] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-qm589" node="ha-609500-m02"
	
	
	==> kubelet <==
	Jun 04 22:18:31 ha-609500 kubelet[2220]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 04 22:19:31 ha-609500 kubelet[2220]: E0604 22:19:31.214146    2220 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 04 22:19:31 ha-609500 kubelet[2220]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 04 22:19:31 ha-609500 kubelet[2220]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 04 22:19:31 ha-609500 kubelet[2220]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 04 22:19:31 ha-609500 kubelet[2220]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 04 22:20:31 ha-609500 kubelet[2220]: E0604 22:20:31.214930    2220 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 04 22:20:31 ha-609500 kubelet[2220]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 04 22:20:31 ha-609500 kubelet[2220]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 04 22:20:31 ha-609500 kubelet[2220]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 04 22:20:31 ha-609500 kubelet[2220]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 04 22:21:31 ha-609500 kubelet[2220]: E0604 22:21:31.211958    2220 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 04 22:21:31 ha-609500 kubelet[2220]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 04 22:21:31 ha-609500 kubelet[2220]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 04 22:21:31 ha-609500 kubelet[2220]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 04 22:21:31 ha-609500 kubelet[2220]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 04 22:22:02 ha-609500 kubelet[2220]: I0604 22:22:02.527130    2220 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-r68pn" podStartSLOduration=558.527106708 podStartE2EDuration="9m18.527106708s" podCreationTimestamp="2024-06-04 22:12:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-04 22:13:00.760899507 +0000 UTC m=+29.834071464" watchObservedRunningTime="2024-06-04 22:22:02.527106708 +0000 UTC m=+571.600278565"
	Jun 04 22:22:02 ha-609500 kubelet[2220]: I0604 22:22:02.528292    2220 topology_manager.go:215] "Topology Admit Handler" podUID="a65032b3-dd37-4bd6-b673-0d71344c7360" podNamespace="default" podName="busybox-fc5497c4f-m2dsk"
	Jun 04 22:22:02 ha-609500 kubelet[2220]: I0604 22:22:02.632516    2220 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pf8dt\" (UniqueName: \"kubernetes.io/projected/a65032b3-dd37-4bd6-b673-0d71344c7360-kube-api-access-pf8dt\") pod \"busybox-fc5497c4f-m2dsk\" (UID: \"a65032b3-dd37-4bd6-b673-0d71344c7360\") " pod="default/busybox-fc5497c4f-m2dsk"
	Jun 04 22:22:03 ha-609500 kubelet[2220]: I0604 22:22:03.963772    2220 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd105bf931810d5094fe400f19d7941e4062c0e8296b59dc0adb294e6d176eca"
	Jun 04 22:22:31 ha-609500 kubelet[2220]: E0604 22:22:31.220964    2220 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 04 22:22:31 ha-609500 kubelet[2220]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 04 22:22:31 ha-609500 kubelet[2220]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 04 22:22:31 ha-609500 kubelet[2220]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 04 22:22:31 ha-609500 kubelet[2220]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0604 22:23:02.680129   10228 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-609500 -n ha-609500
E0604 22:23:16.997901   14064 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-369400\client.crt: The system cannot find the path specified.
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-609500 -n ha-609500: (14.4633003s)
helpers_test.go:261: (dbg) Run:  kubectl --context ha-609500 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/PingHostFromPods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (74.25s)

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (669.03s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-609500 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Done: out/minikube-windows-amd64.exe -p ha-609500 status --output json -v=7 --alsologtostderr: (54.6735097s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-609500 cp testdata\cp-test.txt ha-609500:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-609500 cp testdata\cp-test.txt ha-609500:/home/docker/cp-test.txt: (11.0149038s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-609500 ssh -n ha-609500 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-609500 ssh -n ha-609500 "sudo cat /home/docker/cp-test.txt": (10.846033s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-609500 cp ha-609500:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile2803397463\001\cp-test_ha-609500.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-609500 cp ha-609500:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile2803397463\001\cp-test_ha-609500.txt: (10.8778042s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-609500 ssh -n ha-609500 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-609500 ssh -n ha-609500 "sudo cat /home/docker/cp-test.txt": (10.7389917s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-609500 cp ha-609500:/home/docker/cp-test.txt ha-609500-m02:/home/docker/cp-test_ha-609500_ha-609500-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-609500 cp ha-609500:/home/docker/cp-test.txt ha-609500-m02:/home/docker/cp-test_ha-609500_ha-609500-m02.txt: (18.6727109s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-609500 ssh -n ha-609500 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-609500 ssh -n ha-609500 "sudo cat /home/docker/cp-test.txt": (10.5990894s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-609500 ssh -n ha-609500-m02 "sudo cat /home/docker/cp-test_ha-609500_ha-609500-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-609500 ssh -n ha-609500-m02 "sudo cat /home/docker/cp-test_ha-609500_ha-609500-m02.txt": (10.5413084s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-609500 cp ha-609500:/home/docker/cp-test.txt ha-609500-m03:/home/docker/cp-test_ha-609500_ha-609500-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-609500 cp ha-609500:/home/docker/cp-test.txt ha-609500-m03:/home/docker/cp-test_ha-609500_ha-609500-m03.txt: (18.469369s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-609500 ssh -n ha-609500 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-609500 ssh -n ha-609500 "sudo cat /home/docker/cp-test.txt": (10.6811159s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-609500 ssh -n ha-609500-m03 "sudo cat /home/docker/cp-test_ha-609500_ha-609500-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-609500 ssh -n ha-609500-m03 "sudo cat /home/docker/cp-test_ha-609500_ha-609500-m03.txt": (10.6433615s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-609500 cp ha-609500:/home/docker/cp-test.txt ha-609500-m04:/home/docker/cp-test_ha-609500_ha-609500-m04.txt
E0604 22:31:45.650036   14064 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-235400\client.crt: The system cannot find the path specified.
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-609500 cp ha-609500:/home/docker/cp-test.txt ha-609500-m04:/home/docker/cp-test_ha-609500_ha-609500-m04.txt: (18.8127643s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-609500 ssh -n ha-609500 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-609500 ssh -n ha-609500 "sudo cat /home/docker/cp-test.txt": (10.8978822s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-609500 ssh -n ha-609500-m04 "sudo cat /home/docker/cp-test_ha-609500_ha-609500-m04.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-609500 ssh -n ha-609500-m04 "sudo cat /home/docker/cp-test_ha-609500_ha-609500-m04.txt": (10.8280344s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-609500 cp testdata\cp-test.txt ha-609500-m02:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-609500 cp testdata\cp-test.txt ha-609500-m02:/home/docker/cp-test.txt: (10.8520556s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-609500 ssh -n ha-609500-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-609500 ssh -n ha-609500-m02 "sudo cat /home/docker/cp-test.txt": (10.6538085s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-609500 cp ha-609500-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile2803397463\001\cp-test_ha-609500-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-609500 cp ha-609500-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile2803397463\001\cp-test_ha-609500-m02.txt: (10.7052551s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-609500 ssh -n ha-609500-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-609500 ssh -n ha-609500-m02 "sudo cat /home/docker/cp-test.txt": (10.7172395s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-609500 cp ha-609500-m02:/home/docker/cp-test.txt ha-609500:/home/docker/cp-test_ha-609500-m02_ha-609500.txt
E0604 22:33:16.994214   14064 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-369400\client.crt: The system cannot find the path specified.
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-609500 cp ha-609500-m02:/home/docker/cp-test.txt ha-609500:/home/docker/cp-test_ha-609500-m02_ha-609500.txt: (18.3691039s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-609500 ssh -n ha-609500-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-609500 ssh -n ha-609500-m02 "sudo cat /home/docker/cp-test.txt": (10.3373156s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-609500 ssh -n ha-609500 "sudo cat /home/docker/cp-test_ha-609500-m02_ha-609500.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-609500 ssh -n ha-609500 "sudo cat /home/docker/cp-test_ha-609500-m02_ha-609500.txt": (10.4061482s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-609500 cp ha-609500-m02:/home/docker/cp-test.txt ha-609500-m03:/home/docker/cp-test_ha-609500-m02_ha-609500-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-609500 cp ha-609500-m02:/home/docker/cp-test.txt ha-609500-m03:/home/docker/cp-test_ha-609500-m02_ha-609500-m03.txt: (18.3161929s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-609500 ssh -n ha-609500-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-609500 ssh -n ha-609500-m02 "sudo cat /home/docker/cp-test.txt": (10.5907788s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-609500 ssh -n ha-609500-m03 "sudo cat /home/docker/cp-test_ha-609500-m02_ha-609500-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-609500 ssh -n ha-609500-m03 "sudo cat /home/docker/cp-test_ha-609500-m02_ha-609500-m03.txt": (10.3985525s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-609500 cp ha-609500-m02:/home/docker/cp-test.txt ha-609500-m04:/home/docker/cp-test_ha-609500-m02_ha-609500-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-609500 cp ha-609500-m02:/home/docker/cp-test.txt ha-609500-m04:/home/docker/cp-test_ha-609500-m02_ha-609500-m04.txt: (18.1089291s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-609500 ssh -n ha-609500-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-609500 ssh -n ha-609500-m02 "sudo cat /home/docker/cp-test.txt": (10.421936s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-609500 ssh -n ha-609500-m04 "sudo cat /home/docker/cp-test_ha-609500-m02_ha-609500-m04.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-609500 ssh -n ha-609500-m04 "sudo cat /home/docker/cp-test_ha-609500-m02_ha-609500-m04.txt": (10.46548s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-609500 cp testdata\cp-test.txt ha-609500-m03:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-609500 cp testdata\cp-test.txt ha-609500-m03:/home/docker/cp-test.txt: (10.3257158s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-609500 ssh -n ha-609500-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-609500 ssh -n ha-609500-m03 "sudo cat /home/docker/cp-test.txt": (10.3486136s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-609500 cp ha-609500-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile2803397463\001\cp-test_ha-609500-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-609500 cp ha-609500-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile2803397463\001\cp-test_ha-609500-m03.txt: (10.3822283s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-609500 ssh -n ha-609500-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-609500 ssh -n ha-609500-m03 "sudo cat /home/docker/cp-test.txt": (10.3073151s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-609500 cp ha-609500-m03:/home/docker/cp-test.txt ha-609500:/home/docker/cp-test_ha-609500-m03_ha-609500.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-609500 cp ha-609500-m03:/home/docker/cp-test.txt ha-609500:/home/docker/cp-test_ha-609500-m03_ha-609500.txt: (18.2962302s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-609500 ssh -n ha-609500-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-609500 ssh -n ha-609500-m03 "sudo cat /home/docker/cp-test.txt": (10.4711686s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-609500 ssh -n ha-609500 "sudo cat /home/docker/cp-test_ha-609500-m03_ha-609500.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-609500 ssh -n ha-609500 "sudo cat /home/docker/cp-test_ha-609500-m03_ha-609500.txt": (10.4341973s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-609500 cp ha-609500-m03:/home/docker/cp-test.txt ha-609500-m02:/home/docker/cp-test_ha-609500-m03_ha-609500-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-609500 cp ha-609500-m03:/home/docker/cp-test.txt ha-609500-m02:/home/docker/cp-test_ha-609500-m03_ha-609500-m02.txt: (18.2703104s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-609500 ssh -n ha-609500-m03 "sudo cat /home/docker/cp-test.txt"
E0604 22:36:45.655834   14064 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-235400\client.crt: The system cannot find the path specified.
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-609500 ssh -n ha-609500-m03 "sudo cat /home/docker/cp-test.txt": (10.3707286s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-609500 ssh -n ha-609500-m02 "sudo cat /home/docker/cp-test_ha-609500-m03_ha-609500-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-609500 ssh -n ha-609500-m02 "sudo cat /home/docker/cp-test_ha-609500-m03_ha-609500-m02.txt": (10.4583087s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-609500 cp ha-609500-m03:/home/docker/cp-test.txt ha-609500-m04:/home/docker/cp-test_ha-609500-m03_ha-609500-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-609500 cp ha-609500-m03:/home/docker/cp-test.txt ha-609500-m04:/home/docker/cp-test_ha-609500-m03_ha-609500-m04.txt: (18.1442884s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-609500 ssh -n ha-609500-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-609500 ssh -n ha-609500-m03 "sudo cat /home/docker/cp-test.txt": (10.4156562s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-609500 ssh -n ha-609500-m04 "sudo cat /home/docker/cp-test_ha-609500-m03_ha-609500-m04.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-609500 ssh -n ha-609500-m04 "sudo cat /home/docker/cp-test_ha-609500-m03_ha-609500-m04.txt": (10.4317452s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-609500 cp testdata\cp-test.txt ha-609500-m04:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-609500 cp testdata\cp-test.txt ha-609500-m04:/home/docker/cp-test.txt: (10.4682045s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-609500 ssh -n ha-609500-m04 "sudo cat /home/docker/cp-test.txt"
E0604 22:38:00.224593   14064 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-369400\client.crt: The system cannot find the path specified.
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-609500 ssh -n ha-609500-m04 "sudo cat /home/docker/cp-test.txt": (10.3879677s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-609500 cp ha-609500-m04:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile2803397463\001\cp-test_ha-609500-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-609500 cp ha-609500-m04:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile2803397463\001\cp-test_ha-609500-m04.txt: (11.1254038s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-609500 ssh -n ha-609500-m04 "sudo cat /home/docker/cp-test.txt"
E0604 22:38:16.993775   14064 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-369400\client.crt: The system cannot find the path specified.
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-609500 ssh -n ha-609500-m04 "sudo cat /home/docker/cp-test.txt": (10.4180036s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-609500 cp ha-609500-m04:/home/docker/cp-test.txt ha-609500:/home/docker/cp-test_ha-609500-m04_ha-609500.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-609500 cp ha-609500-m04:/home/docker/cp-test.txt ha-609500:/home/docker/cp-test_ha-609500-m04_ha-609500.txt: (18.0011258s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-609500 ssh -n ha-609500-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-609500 ssh -n ha-609500-m04 "sudo cat /home/docker/cp-test.txt": (10.4207875s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-609500 ssh -n ha-609500 "sudo cat /home/docker/cp-test_ha-609500-m04_ha-609500.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-609500 ssh -n ha-609500 "sudo cat /home/docker/cp-test_ha-609500-m04_ha-609500.txt": (10.4398851s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-609500 cp ha-609500-m04:/home/docker/cp-test.txt ha-609500-m02:/home/docker/cp-test_ha-609500-m04_ha-609500-m02.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-609500 cp ha-609500-m04:/home/docker/cp-test.txt ha-609500-m02:/home/docker/cp-test_ha-609500-m04_ha-609500-m02.txt: exit status 1 (11.4712102s)

                                                
                                                
** stderr ** 
	W0604 22:39:02.090784   10320 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:558: failed to run command by deadline. exceeded timeout : out/minikube-windows-amd64.exe -p ha-609500 cp ha-609500-m04:/home/docker/cp-test.txt ha-609500-m02:/home/docker/cp-test_ha-609500-m04_ha-609500-m02.txt
helpers_test.go:561: failed to run a cp command. args "out/minikube-windows-amd64.exe -p ha-609500 cp ha-609500-m04:/home/docker/cp-test.txt ha-609500-m02:/home/docker/cp-test_ha-609500-m04_ha-609500-m02.txt" : exit status 1
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-609500 ssh -n ha-609500-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-609500 ssh -n ha-609500-m04 "sudo cat /home/docker/cp-test.txt": context deadline exceeded (0s)
helpers_test.go:536: failed to run command by deadline. exceeded timeout : out/minikube-windows-amd64.exe -p ha-609500 ssh -n ha-609500-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:539: failed to run a cp command. args "out/minikube-windows-amd64.exe -p ha-609500 ssh -n ha-609500-m04 \"sudo cat /home/docker/cp-test.txt\"" : context deadline exceeded
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-609500 ssh -n ha-609500-m02 "sudo cat /home/docker/cp-test_ha-609500-m04_ha-609500-m02.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-609500 ssh -n ha-609500-m02 "sudo cat /home/docker/cp-test_ha-609500-m04_ha-609500-m02.txt": context deadline exceeded (0s)
helpers_test.go:536: failed to run command by deadline. exceeded timeout : out/minikube-windows-amd64.exe -p ha-609500 ssh -n ha-609500-m02 "sudo cat /home/docker/cp-test_ha-609500-m04_ha-609500-m02.txt"
helpers_test.go:539: failed to run a cp command. args "out/minikube-windows-amd64.exe -p ha-609500 ssh -n ha-609500-m02 \"sudo cat /home/docker/cp-test_ha-609500-m04_ha-609500-m02.txt\"" : context deadline exceeded
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-609500 cp ha-609500-m04:/home/docker/cp-test.txt ha-609500-m03:/home/docker/cp-test_ha-609500-m04_ha-609500-m03.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-609500 cp ha-609500-m04:/home/docker/cp-test.txt ha-609500-m03:/home/docker/cp-test_ha-609500-m04_ha-609500-m03.txt: context deadline exceeded (0s)
helpers_test.go:558: failed to run command by deadline. exceeded timeout : out/minikube-windows-amd64.exe -p ha-609500 cp ha-609500-m04:/home/docker/cp-test.txt ha-609500-m03:/home/docker/cp-test_ha-609500-m04_ha-609500-m03.txt
helpers_test.go:561: failed to run a cp command. args "out/minikube-windows-amd64.exe -p ha-609500 cp ha-609500-m04:/home/docker/cp-test.txt ha-609500-m03:/home/docker/cp-test_ha-609500-m04_ha-609500-m03.txt" : context deadline exceeded
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-609500 ssh -n ha-609500-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-609500 ssh -n ha-609500-m04 "sudo cat /home/docker/cp-test.txt": context deadline exceeded (0s)
helpers_test.go:536: failed to run command by deadline. exceeded timeout : out/minikube-windows-amd64.exe -p ha-609500 ssh -n ha-609500-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:539: failed to run a cp command. args "out/minikube-windows-amd64.exe -p ha-609500 ssh -n ha-609500-m04 \"sudo cat /home/docker/cp-test.txt\"" : context deadline exceeded
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-609500 ssh -n ha-609500-m03 "sudo cat /home/docker/cp-test_ha-609500-m04_ha-609500-m03.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-609500 ssh -n ha-609500-m03 "sudo cat /home/docker/cp-test_ha-609500-m04_ha-609500-m03.txt": context deadline exceeded (0s)
helpers_test.go:536: failed to run command by deadline. exceeded timeout : out/minikube-windows-amd64.exe -p ha-609500 ssh -n ha-609500-m03 "sudo cat /home/docker/cp-test_ha-609500-m04_ha-609500-m03.txt"
helpers_test.go:539: failed to run a cp command. args "out/minikube-windows-amd64.exe -p ha-609500 ssh -n ha-609500-m03 \"sudo cat /home/docker/cp-test_ha-609500-m04_ha-609500-m03.txt\"" : context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-609500 -n ha-609500
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-609500 -n ha-609500: (13.5481087s)
helpers_test.go:244: <<< TestMultiControlPlane/serial/CopyFile FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/CopyFile]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-609500 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p ha-609500 logs -n 25: (9.7772341s)
helpers_test.go:252: TestMultiControlPlane/serial/CopyFile logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	| Command |                                                           Args                                                            |  Profile  |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	| ssh     | ha-609500 ssh -n ha-609500-m03 sudo cat                                                                                   | ha-609500 | minikube6\jenkins | v1.33.1 | 04 Jun 24 22:34 UTC | 04 Jun 24 22:34 UTC |
	|         | /home/docker/cp-test_ha-609500-m02_ha-609500-m03.txt                                                                      |           |                   |         |                     |                     |
	| cp      | ha-609500 cp ha-609500-m02:/home/docker/cp-test.txt                                                                       | ha-609500 | minikube6\jenkins | v1.33.1 | 04 Jun 24 22:34 UTC | 04 Jun 24 22:34 UTC |
	|         | ha-609500-m04:/home/docker/cp-test_ha-609500-m02_ha-609500-m04.txt                                                        |           |                   |         |                     |                     |
	| ssh     | ha-609500 ssh -n                                                                                                          | ha-609500 | minikube6\jenkins | v1.33.1 | 04 Jun 24 22:34 UTC | 04 Jun 24 22:34 UTC |
	|         | ha-609500-m02 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| ssh     | ha-609500 ssh -n ha-609500-m04 sudo cat                                                                                   | ha-609500 | minikube6\jenkins | v1.33.1 | 04 Jun 24 22:34 UTC | 04 Jun 24 22:35 UTC |
	|         | /home/docker/cp-test_ha-609500-m02_ha-609500-m04.txt                                                                      |           |                   |         |                     |                     |
	| cp      | ha-609500 cp testdata\cp-test.txt                                                                                         | ha-609500 | minikube6\jenkins | v1.33.1 | 04 Jun 24 22:35 UTC | 04 Jun 24 22:35 UTC |
	|         | ha-609500-m03:/home/docker/cp-test.txt                                                                                    |           |                   |         |                     |                     |
	| ssh     | ha-609500 ssh -n                                                                                                          | ha-609500 | minikube6\jenkins | v1.33.1 | 04 Jun 24 22:35 UTC | 04 Jun 24 22:35 UTC |
	|         | ha-609500-m03 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| cp      | ha-609500 cp ha-609500-m03:/home/docker/cp-test.txt                                                                       | ha-609500 | minikube6\jenkins | v1.33.1 | 04 Jun 24 22:35 UTC | 04 Jun 24 22:35 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile2803397463\001\cp-test_ha-609500-m03.txt |           |                   |         |                     |                     |
	| ssh     | ha-609500 ssh -n                                                                                                          | ha-609500 | minikube6\jenkins | v1.33.1 | 04 Jun 24 22:35 UTC | 04 Jun 24 22:35 UTC |
	|         | ha-609500-m03 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| cp      | ha-609500 cp ha-609500-m03:/home/docker/cp-test.txt                                                                       | ha-609500 | minikube6\jenkins | v1.33.1 | 04 Jun 24 22:35 UTC | 04 Jun 24 22:36 UTC |
	|         | ha-609500:/home/docker/cp-test_ha-609500-m03_ha-609500.txt                                                                |           |                   |         |                     |                     |
	| ssh     | ha-609500 ssh -n                                                                                                          | ha-609500 | minikube6\jenkins | v1.33.1 | 04 Jun 24 22:36 UTC | 04 Jun 24 22:36 UTC |
	|         | ha-609500-m03 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| ssh     | ha-609500 ssh -n ha-609500 sudo cat                                                                                       | ha-609500 | minikube6\jenkins | v1.33.1 | 04 Jun 24 22:36 UTC | 04 Jun 24 22:36 UTC |
	|         | /home/docker/cp-test_ha-609500-m03_ha-609500.txt                                                                          |           |                   |         |                     |                     |
	| cp      | ha-609500 cp ha-609500-m03:/home/docker/cp-test.txt                                                                       | ha-609500 | minikube6\jenkins | v1.33.1 | 04 Jun 24 22:36 UTC | 04 Jun 24 22:36 UTC |
	|         | ha-609500-m02:/home/docker/cp-test_ha-609500-m03_ha-609500-m02.txt                                                        |           |                   |         |                     |                     |
	| ssh     | ha-609500 ssh -n                                                                                                          | ha-609500 | minikube6\jenkins | v1.33.1 | 04 Jun 24 22:36 UTC | 04 Jun 24 22:36 UTC |
	|         | ha-609500-m03 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| ssh     | ha-609500 ssh -n ha-609500-m02 sudo cat                                                                                   | ha-609500 | minikube6\jenkins | v1.33.1 | 04 Jun 24 22:36 UTC | 04 Jun 24 22:37 UTC |
	|         | /home/docker/cp-test_ha-609500-m03_ha-609500-m02.txt                                                                      |           |                   |         |                     |                     |
	| cp      | ha-609500 cp ha-609500-m03:/home/docker/cp-test.txt                                                                       | ha-609500 | minikube6\jenkins | v1.33.1 | 04 Jun 24 22:37 UTC | 04 Jun 24 22:37 UTC |
	|         | ha-609500-m04:/home/docker/cp-test_ha-609500-m03_ha-609500-m04.txt                                                        |           |                   |         |                     |                     |
	| ssh     | ha-609500 ssh -n                                                                                                          | ha-609500 | minikube6\jenkins | v1.33.1 | 04 Jun 24 22:37 UTC | 04 Jun 24 22:37 UTC |
	|         | ha-609500-m03 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| ssh     | ha-609500 ssh -n ha-609500-m04 sudo cat                                                                                   | ha-609500 | minikube6\jenkins | v1.33.1 | 04 Jun 24 22:37 UTC | 04 Jun 24 22:37 UTC |
	|         | /home/docker/cp-test_ha-609500-m03_ha-609500-m04.txt                                                                      |           |                   |         |                     |                     |
	| cp      | ha-609500 cp testdata\cp-test.txt                                                                                         | ha-609500 | minikube6\jenkins | v1.33.1 | 04 Jun 24 22:37 UTC | 04 Jun 24 22:37 UTC |
	|         | ha-609500-m04:/home/docker/cp-test.txt                                                                                    |           |                   |         |                     |                     |
	| ssh     | ha-609500 ssh -n                                                                                                          | ha-609500 | minikube6\jenkins | v1.33.1 | 04 Jun 24 22:37 UTC | 04 Jun 24 22:38 UTC |
	|         | ha-609500-m04 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| cp      | ha-609500 cp ha-609500-m04:/home/docker/cp-test.txt                                                                       | ha-609500 | minikube6\jenkins | v1.33.1 | 04 Jun 24 22:38 UTC | 04 Jun 24 22:38 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile2803397463\001\cp-test_ha-609500-m04.txt |           |                   |         |                     |                     |
	| ssh     | ha-609500 ssh -n                                                                                                          | ha-609500 | minikube6\jenkins | v1.33.1 | 04 Jun 24 22:38 UTC | 04 Jun 24 22:38 UTC |
	|         | ha-609500-m04 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| cp      | ha-609500 cp ha-609500-m04:/home/docker/cp-test.txt                                                                       | ha-609500 | minikube6\jenkins | v1.33.1 | 04 Jun 24 22:38 UTC | 04 Jun 24 22:38 UTC |
	|         | ha-609500:/home/docker/cp-test_ha-609500-m04_ha-609500.txt                                                                |           |                   |         |                     |                     |
	| ssh     | ha-609500 ssh -n                                                                                                          | ha-609500 | minikube6\jenkins | v1.33.1 | 04 Jun 24 22:38 UTC | 04 Jun 24 22:38 UTC |
	|         | ha-609500-m04 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| ssh     | ha-609500 ssh -n ha-609500 sudo cat                                                                                       | ha-609500 | minikube6\jenkins | v1.33.1 | 04 Jun 24 22:38 UTC | 04 Jun 24 22:39 UTC |
	|         | /home/docker/cp-test_ha-609500-m04_ha-609500.txt                                                                          |           |                   |         |                     |                     |
	| cp      | ha-609500 cp ha-609500-m04:/home/docker/cp-test.txt                                                                       | ha-609500 | minikube6\jenkins | v1.33.1 | 04 Jun 24 22:39 UTC |                     |
	|         | ha-609500-m02:/home/docker/cp-test_ha-609500-m04_ha-609500-m02.txt                                                        |           |                   |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
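	For reference, each pair of rows in the table above reduces to a copy followed by a read-back over ssh. A representative pair, using this run's profile and node names (the in-VM destination path is the one the test uses; any local source file would do):
	
		out/minikube-windows-amd64.exe -p ha-609500 cp testdata\cp-test.txt ha-609500-m04:/home/docker/cp-test.txt
		out/minikube-windows-amd64.exe -p ha-609500 ssh -n ha-609500-m04 "sudo cat /home/docker/cp-test.txt"
	
	The serial CopyFile test repeats this pattern from the host to each node and between every pair of nodes, reading the file back after each copy.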
	
	
	==> Last Start <==
	Log file created at: 2024/06/04 22:09:13
	Running on machine: minikube6
	Binary: Built with gc go1.22.3 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
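	Given the line format documented just above, warning and error entries in a saved copy of this log can be pulled out by filtering on the leading severity letter (the file name here is illustrative):
	
		Select-String -Path .\last-start.log -Pattern '^[WEF]\d{4} '
	
	Informational "I" lines are dropped; W/E/F lines, such as the gopshost.Virtualization warning below, remain.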
	I0604 22:09:13.628693    4212 out.go:291] Setting OutFile to fd 1068 ...
	I0604 22:09:13.629002    4212 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0604 22:09:13.629002    4212 out.go:304] Setting ErrFile to fd 884...
	I0604 22:09:13.629002    4212 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0604 22:09:13.654995    4212 out.go:298] Setting JSON to false
	I0604 22:09:13.659956    4212 start.go:129] hostinfo: {"hostname":"minikube6","uptime":86203,"bootTime":1717452750,"procs":186,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4474 Build 19045.4474","kernelVersion":"10.0.19045.4474 Build 19045.4474","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0604 22:09:13.659956    4212 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0604 22:09:13.664444    4212 out.go:177] * [ha-609500] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4474 Build 19045.4474
	I0604 22:09:13.671226    4212 notify.go:220] Checking for updates...
	I0604 22:09:13.673771    4212 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0604 22:09:13.676363    4212 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0604 22:09:13.678936    4212 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0604 22:09:13.681495    4212 out.go:177]   - MINIKUBE_LOCATION=19024
	I0604 22:09:13.684035    4212 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0604 22:09:13.686148    4212 driver.go:392] Setting default libvirt URI to qemu:///system
	I0604 22:09:19.404594    4212 out.go:177] * Using the hyperv driver based on user configuration
	I0604 22:09:19.408723    4212 start.go:297] selected driver: hyperv
	I0604 22:09:19.408723    4212 start.go:901] validating driver "hyperv" against <nil>
	I0604 22:09:19.408723    4212 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0604 22:09:19.462869    4212 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0604 22:09:19.463609    4212 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0604 22:09:19.463609    4212 cni.go:84] Creating CNI manager for ""
	I0604 22:09:19.463609    4212 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0604 22:09:19.463609    4212 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0604 22:09:19.463609    4212 start.go:340] cluster config:
	{Name:ha-609500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717518322-19024@sha256:d2210ba725128d67c6173c8b8d82d6c8736e8dad7a6c389a278f795205c6764f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-609500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0604 22:09:19.464866    4212 iso.go:125] acquiring lock: {Name:mkd51e140550ee3ad29317eefa47594b071594dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0604 22:09:19.469379    4212 out.go:177] * Starting "ha-609500" primary control-plane node in "ha-609500" cluster
	I0604 22:09:19.471564    4212 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0604 22:09:19.471564    4212 preload.go:147] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
	I0604 22:09:19.471564    4212 cache.go:56] Caching tarball of preloaded images
	I0604 22:09:19.471564    4212 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0604 22:09:19.471564    4212 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0604 22:09:19.474946    4212 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\config.json ...
	I0604 22:09:19.474946    4212 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\config.json: {Name:mkc3bcc5a7016d2cd3c4b8a4fd482a3f874b5e79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0604 22:09:19.475850    4212 start.go:360] acquireMachinesLock for ha-609500: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0604 22:09:19.475850    4212 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-609500"
	I0604 22:09:19.477320    4212 start.go:93] Provisioning new machine with config: &{Name:ha-609500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19024/minikube-v1.33.1-1717518792-19024-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717518322-19024@sha256:d2210ba725128d67c6173c8b8d82d6c8736e8dad7a6c389a278f795205c6764f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-609500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0604 22:09:19.477320    4212 start.go:125] createHost starting for "" (driver="hyperv")
	I0604 22:09:19.477658    4212 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0604 22:09:19.482497    4212 start.go:159] libmachine.API.Create for "ha-609500" (driver="hyperv")
	I0604 22:09:19.482497    4212 client.go:168] LocalClient.Create starting
	I0604 22:09:19.482649    4212 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0604 22:09:19.482649    4212 main.go:141] libmachine: Decoding PEM data...
	I0604 22:09:19.482649    4212 main.go:141] libmachine: Parsing certificate...
	I0604 22:09:19.482649    4212 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0604 22:09:19.483683    4212 main.go:141] libmachine: Decoding PEM data...
	I0604 22:09:19.483714    4212 main.go:141] libmachine: Parsing certificate...
	I0604 22:09:19.483880    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0604 22:09:21.695092    4212 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0604 22:09:21.695092    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:09:21.705639    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0604 22:09:23.524881    4212 main.go:141] libmachine: [stdout =====>] : False
	
	I0604 22:09:23.534211    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:09:23.534350    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0604 22:09:25.161177    4212 main.go:141] libmachine: [stdout =====>] : True
	
	I0604 22:09:25.161177    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:09:25.161561    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0604 22:09:28.900813    4212 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0604 22:09:28.915352    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:09:28.918065    4212 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1717518792-19024-amd64.iso...
	I0604 22:09:29.464073    4212 main.go:141] libmachine: Creating SSH key...
	I0604 22:09:29.777217    4212 main.go:141] libmachine: Creating VM...
	I0604 22:09:29.777639    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0604 22:09:32.778698    4212 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0604 22:09:32.791354    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:09:32.791497    4212 main.go:141] libmachine: Using switch "Default Switch"
	I0604 22:09:32.791618    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0604 22:09:34.638193    4212 main.go:141] libmachine: [stdout =====>] : True
	
	I0604 22:09:34.638193    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:09:34.638193    4212 main.go:141] libmachine: Creating VHD
	I0604 22:09:34.638293    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-609500\fixed.vhd' -SizeBytes 10MB -Fixed
	I0604 22:09:38.576164    4212 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-609500\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 89A539C1-C84F-40F1-9263-948AF4BDDF8B
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0604 22:09:38.576164    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:09:38.587724    4212 main.go:141] libmachine: Writing magic tar header
	I0604 22:09:38.587724    4212 main.go:141] libmachine: Writing SSH key tar header
	I0604 22:09:38.600307    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-609500\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-609500\disk.vhd' -VHDType Dynamic -DeleteSource
	I0604 22:09:41.888162    4212 main.go:141] libmachine: [stdout =====>] : 
	I0604 22:09:41.900236    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:09:41.900236    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-609500\disk.vhd' -SizeBytes 20000MB
	I0604 22:09:44.606791    4212 main.go:141] libmachine: [stdout =====>] : 
	I0604 22:09:44.606791    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:09:44.606791    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-609500 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-609500' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0604 22:09:48.451972    4212 main.go:141] libmachine: [stdout =====>] : 
	Name      State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----      ----- ----------- ----------------- ------   ------             -------
	ha-609500 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0604 22:09:48.452170    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:09:48.452170    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-609500 -DynamicMemoryEnabled $false
	I0604 22:09:50.842484    4212 main.go:141] libmachine: [stdout =====>] : 
	I0604 22:09:50.842484    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:09:50.842484    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-609500 -Count 2
	I0604 22:09:53.191068    4212 main.go:141] libmachine: [stdout =====>] : 
	I0604 22:09:53.191283    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:09:53.191283    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-609500 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-609500\boot2docker.iso'
	I0604 22:09:55.933930    4212 main.go:141] libmachine: [stdout =====>] : 
	I0604 22:09:55.934151    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:09:55.934151    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-609500 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-609500\disk.vhd'
	I0604 22:09:58.752394    4212 main.go:141] libmachine: [stdout =====>] : 
	I0604 22:09:58.752394    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:09:58.752394    4212 main.go:141] libmachine: Starting VM...
	I0604 22:09:58.752737    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-609500
	I0604 22:10:01.953951    4212 main.go:141] libmachine: [stdout =====>] : 
	I0604 22:10:01.953951    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:10:01.953951    4212 main.go:141] libmachine: Waiting for host to start...
	I0604 22:10:01.955958    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500 ).state
	I0604 22:10:04.389904    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:10:04.389904    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:10:04.392421    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500 ).networkadapters[0]).ipaddresses[0]
	I0604 22:10:07.025192    4212 main.go:141] libmachine: [stdout =====>] : 
	I0604 22:10:07.032849    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:10:08.043187    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500 ).state
	I0604 22:10:10.378581    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:10:10.384712    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:10:10.384712    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500 ).networkadapters[0]).ipaddresses[0]
	I0604 22:10:13.082562    4212 main.go:141] libmachine: [stdout =====>] : 
	I0604 22:10:13.082562    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:10:14.090570    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500 ).state
	I0604 22:10:16.395237    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:10:16.400903    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:10:16.400996    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500 ).networkadapters[0]).ipaddresses[0]
	I0604 22:10:19.080360    4212 main.go:141] libmachine: [stdout =====>] : 
	I0604 22:10:19.080437    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:10:20.093055    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500 ).state
	I0604 22:10:22.425895    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:10:22.438241    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:10:22.438241    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500 ).networkadapters[0]).ipaddresses[0]
	I0604 22:10:25.075139    4212 main.go:141] libmachine: [stdout =====>] : 
	I0604 22:10:25.075301    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:10:26.090545    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500 ).state
	I0604 22:10:28.449958    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:10:28.449958    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:10:28.449958    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500 ).networkadapters[0]).ipaddresses[0]
	I0604 22:10:31.176260    4212 main.go:141] libmachine: [stdout =====>] : 172.20.131.101
	
	I0604 22:10:31.176260    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:10:31.189309    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500 ).state
	I0604 22:10:33.446338    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:10:33.458818    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:10:33.458905    4212 machine.go:94] provisionDockerMachine start ...
	I0604 22:10:33.458905    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500 ).state
	I0604 22:10:35.749407    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:10:35.762108    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:10:35.762108    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500 ).networkadapters[0]).ipaddresses[0]
	I0604 22:10:38.470367    4212 main.go:141] libmachine: [stdout =====>] : 172.20.131.101
	
	I0604 22:10:38.470367    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:10:38.489285    4212 main.go:141] libmachine: Using SSH client type: native
	I0604 22:10:38.505466    4212 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf6a540] 0xf6d120 <nil>  [] 0s} 172.20.131.101 22 <nil> <nil>}
	I0604 22:10:38.505466    4212 main.go:141] libmachine: About to run SSH command:
	hostname
	I0604 22:10:38.636933    4212 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0604 22:10:38.636933    4212 buildroot.go:166] provisioning hostname "ha-609500"
	I0604 22:10:38.636933    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500 ).state
	I0604 22:10:40.883315    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:10:40.883508    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:10:40.883607    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500 ).networkadapters[0]).ipaddresses[0]
	I0604 22:10:43.544680    4212 main.go:141] libmachine: [stdout =====>] : 172.20.131.101
	
	I0604 22:10:43.544680    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:10:43.551335    4212 main.go:141] libmachine: Using SSH client type: native
	I0604 22:10:43.551660    4212 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf6a540] 0xf6d120 <nil>  [] 0s} 172.20.131.101 22 <nil> <nil>}
	I0604 22:10:43.551660    4212 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-609500 && echo "ha-609500" | sudo tee /etc/hostname
	I0604 22:10:43.717271    4212 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-609500
	
	I0604 22:10:43.717413    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500 ).state
	I0604 22:10:45.974635    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:10:45.974829    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:10:45.974829    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500 ).networkadapters[0]).ipaddresses[0]
	I0604 22:10:48.700837    4212 main.go:141] libmachine: [stdout =====>] : 172.20.131.101
	
	I0604 22:10:48.700928    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:10:48.707608    4212 main.go:141] libmachine: Using SSH client type: native
	I0604 22:10:48.708115    4212 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf6a540] 0xf6d120 <nil>  [] 0s} 172.20.131.101 22 <nil> <nil>}
	I0604 22:10:48.708224    4212 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-609500' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-609500/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-609500' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0604 22:10:48.856465    4212 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0604 22:10:48.856465    4212 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0604 22:10:48.857002    4212 buildroot.go:174] setting up certificates
	I0604 22:10:48.857002    4212 provision.go:84] configureAuth start
	I0604 22:10:48.857085    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500 ).state
	I0604 22:10:51.122268    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:10:51.122335    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:10:51.122335    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500 ).networkadapters[0]).ipaddresses[0]
	I0604 22:10:53.842112    4212 main.go:141] libmachine: [stdout =====>] : 172.20.131.101
	
	I0604 22:10:53.854634    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:10:53.854725    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500 ).state
	I0604 22:10:56.102411    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:10:56.102411    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:10:56.114276    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500 ).networkadapters[0]).ipaddresses[0]
	I0604 22:10:58.794096    4212 main.go:141] libmachine: [stdout =====>] : 172.20.131.101
	
	I0604 22:10:58.805305    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:10:58.805476    4212 provision.go:143] copyHostCerts
	I0604 22:10:58.805602    4212 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0604 22:10:58.805602    4212 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0604 22:10:58.805602    4212 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0604 22:10:58.806458    4212 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0604 22:10:58.807323    4212 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0604 22:10:58.807323    4212 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0604 22:10:58.807323    4212 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0604 22:10:58.808011    4212 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0604 22:10:58.809050    4212 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0604 22:10:58.809219    4212 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0604 22:10:58.809219    4212 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0604 22:10:58.809219    4212 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0604 22:10:58.810427    4212 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-609500 san=[127.0.0.1 172.20.131.101 ha-609500 localhost minikube]
	I0604 22:10:59.098187    4212 provision.go:177] copyRemoteCerts
	I0604 22:10:59.114184    4212 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0604 22:10:59.114184    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500 ).state
	I0604 22:11:01.346160    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:11:01.346160    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:11:01.359602    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500 ).networkadapters[0]).ipaddresses[0]
	I0604 22:11:04.065314    4212 main.go:141] libmachine: [stdout =====>] : 172.20.131.101
	
	I0604 22:11:04.066033    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:11:04.066033    4212 sshutil.go:53] new ssh client: &{IP:172.20.131.101 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-609500\id_rsa Username:docker}
	I0604 22:11:04.177718    4212 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.0634487s)
	I0604 22:11:04.177756    4212 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0604 22:11:04.177756    4212 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0604 22:11:04.240542    4212 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0604 22:11:04.241195    4212 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0604 22:11:04.289231    4212 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0604 22:11:04.289666    4212 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1196 bytes)
	I0604 22:11:04.344992    4212 provision.go:87] duration metric: took 15.487718s to configureAuth
	I0604 22:11:04.344992    4212 buildroot.go:189] setting minikube options for container-runtime
	I0604 22:11:04.344992    4212 config.go:182] Loaded profile config "ha-609500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0604 22:11:04.345682    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500 ).state
	I0604 22:11:06.576728    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:11:06.576897    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:11:06.576897    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500 ).networkadapters[0]).ipaddresses[0]
	I0604 22:11:09.234198    4212 main.go:141] libmachine: [stdout =====>] : 172.20.131.101
	
	I0604 22:11:09.246134    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:11:09.250815    4212 main.go:141] libmachine: Using SSH client type: native
	I0604 22:11:09.251455    4212 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf6a540] 0xf6d120 <nil>  [] 0s} 172.20.131.101 22 <nil> <nil>}
	I0604 22:11:09.251455    4212 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0604 22:11:09.382003    4212 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0604 22:11:09.382003    4212 buildroot.go:70] root file system type: tmpfs
	I0604 22:11:09.382003    4212 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0604 22:11:09.382598    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500 ).state
	I0604 22:11:11.621981    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:11:11.634868    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:11:11.634868    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500 ).networkadapters[0]).ipaddresses[0]
	I0604 22:11:14.385008    4212 main.go:141] libmachine: [stdout =====>] : 172.20.131.101
	
	I0604 22:11:14.397190    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:11:14.402496    4212 main.go:141] libmachine: Using SSH client type: native
	I0604 22:11:14.403412    4212 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf6a540] 0xf6d120 <nil>  [] 0s} 172.20.131.101 22 <nil> <nil>}
	I0604 22:11:14.403412    4212 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0604 22:11:14.564582    4212 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0604 22:11:14.564582    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500 ).state
	I0604 22:11:16.836580    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:11:16.836580    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:11:16.844418    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500 ).networkadapters[0]).ipaddresses[0]
	I0604 22:11:19.536830    4212 main.go:141] libmachine: [stdout =====>] : 172.20.131.101
	
	I0604 22:11:19.549576    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:11:19.555869    4212 main.go:141] libmachine: Using SSH client type: native
	I0604 22:11:19.555869    4212 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf6a540] 0xf6d120 <nil>  [] 0s} 172.20.131.101 22 <nil> <nil>}
	I0604 22:11:19.556405    4212 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0604 22:11:21.788402    4212 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0604 22:11:21.788402    4212 machine.go:97] duration metric: took 48.3291099s to provisionDockerMachine
	I0604 22:11:21.788402    4212 client.go:171] duration metric: took 2m2.3049355s to LocalClient.Create
	I0604 22:11:21.788402    4212 start.go:167] duration metric: took 2m2.3049672s to libmachine.API.Create "ha-609500"
	I0604 22:11:21.788402    4212 start.go:293] postStartSetup for "ha-609500" (driver="hyperv")
	I0604 22:11:21.788402    4212 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0604 22:11:21.802521    4212 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0604 22:11:21.802521    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500 ).state
	I0604 22:11:24.093649    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:11:24.096991    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:11:24.096991    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500 ).networkadapters[0]).ipaddresses[0]
	I0604 22:11:26.827074    4212 main.go:141] libmachine: [stdout =====>] : 172.20.131.101
	
	I0604 22:11:26.827074    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:11:26.829488    4212 sshutil.go:53] new ssh client: &{IP:172.20.131.101 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-609500\id_rsa Username:docker}
	I0604 22:11:26.938968    4212 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.1364053s)
	I0604 22:11:26.950972    4212 ssh_runner.go:195] Run: cat /etc/os-release
	I0604 22:11:26.960331    4212 info.go:137] Remote host: Buildroot 2023.02.9
	I0604 22:11:26.960331    4212 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0604 22:11:26.961029    4212 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0604 22:11:26.962234    4212 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\140642.pem -> 140642.pem in /etc/ssl/certs
	I0604 22:11:26.962234    4212 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\140642.pem -> /etc/ssl/certs/140642.pem
	I0604 22:11:26.975614    4212 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0604 22:11:26.994900    4212 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\140642.pem --> /etc/ssl/certs/140642.pem (1708 bytes)
	I0604 22:11:27.055995    4212 start.go:296] duration metric: took 5.2674685s for postStartSetup
	I0604 22:11:27.058600    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500 ).state
	I0604 22:11:29.317103    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:11:29.329697    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:11:29.330044    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500 ).networkadapters[0]).ipaddresses[0]
	I0604 22:11:32.044132    4212 main.go:141] libmachine: [stdout =====>] : 172.20.131.101
	
	I0604 22:11:32.044132    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:11:32.044132    4212 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\config.json ...
	I0604 22:11:32.060697    4212 start.go:128] duration metric: took 2m12.5812428s to createHost
	I0604 22:11:32.060823    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500 ).state
	I0604 22:11:34.306423    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:11:34.306423    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:11:34.306423    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500 ).networkadapters[0]).ipaddresses[0]
	I0604 22:11:36.973663    4212 main.go:141] libmachine: [stdout =====>] : 172.20.131.101
	
	I0604 22:11:36.973663    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:11:36.992635    4212 main.go:141] libmachine: Using SSH client type: native
	I0604 22:11:36.992635    4212 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf6a540] 0xf6d120 <nil>  [] 0s} 172.20.131.101 22 <nil> <nil>}
	I0604 22:11:36.992635    4212 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0604 22:11:37.125467    4212 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717539097.132815399
	
	I0604 22:11:37.125467    4212 fix.go:216] guest clock: 1717539097.132815399
	I0604 22:11:37.126001    4212 fix.go:229] Guest: 2024-06-04 22:11:37.132815399 +0000 UTC Remote: 2024-06-04 22:11:32.0608233 +0000 UTC m=+138.605005501 (delta=5.071992099s)
	I0604 22:11:37.126001    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500 ).state
	I0604 22:11:39.336473    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:11:39.336473    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:11:39.336473    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500 ).networkadapters[0]).ipaddresses[0]
	I0604 22:11:41.994201    4212 main.go:141] libmachine: [stdout =====>] : 172.20.131.101
	
	I0604 22:11:41.994201    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:11:42.000780    4212 main.go:141] libmachine: Using SSH client type: native
	I0604 22:11:42.000935    4212 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf6a540] 0xf6d120 <nil>  [] 0s} 172.20.131.101 22 <nil> <nil>}
	I0604 22:11:42.000935    4212 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1717539097
	I0604 22:11:42.144767    4212 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Jun  4 22:11:37 UTC 2024
	
	I0604 22:11:42.144767    4212 fix.go:236] clock set: Tue Jun  4 22:11:37 UTC 2024
	 (err=<nil>)
	I0604 22:11:42.144767    4212 start.go:83] releasing machines lock for "ha-609500", held for 2m22.6677822s
	I0604 22:11:42.144767    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500 ).state
	I0604 22:11:44.386347    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:11:44.400710    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:11:44.400710    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500 ).networkadapters[0]).ipaddresses[0]
	I0604 22:11:47.098251    4212 main.go:141] libmachine: [stdout =====>] : 172.20.131.101
	
	I0604 22:11:47.098428    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:11:47.102159    4212 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0604 22:11:47.102159    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500 ).state
	I0604 22:11:47.118618    4212 ssh_runner.go:195] Run: cat /version.json
	I0604 22:11:47.118618    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500 ).state
	I0604 22:11:49.395623    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:11:49.395823    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:11:49.395823    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500 ).networkadapters[0]).ipaddresses[0]
	I0604 22:11:49.404017    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:11:49.404017    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:11:49.404017    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500 ).networkadapters[0]).ipaddresses[0]
	I0604 22:11:52.144068    4212 main.go:141] libmachine: [stdout =====>] : 172.20.131.101
	
	I0604 22:11:52.156605    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:11:52.156730    4212 sshutil.go:53] new ssh client: &{IP:172.20.131.101 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-609500\id_rsa Username:docker}
	I0604 22:11:52.182765    4212 main.go:141] libmachine: [stdout =====>] : 172.20.131.101
	
	I0604 22:11:52.182765    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:11:52.183349    4212 sshutil.go:53] new ssh client: &{IP:172.20.131.101 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-609500\id_rsa Username:docker}
	I0604 22:11:52.247337    4212 ssh_runner.go:235] Completed: cat /version.json: (5.1286771s)
	I0604 22:11:52.260959    4212 ssh_runner.go:195] Run: systemctl --version
	I0604 22:11:52.328685    4212 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.226484s)
	I0604 22:11:52.340641    4212 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0604 22:11:52.351598    4212 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0604 22:11:52.363137    4212 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0604 22:11:52.396484    4212 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0604 22:11:52.396573    4212 start.go:494] detecting cgroup driver to use...
	I0604 22:11:52.396573    4212 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0604 22:11:52.445458    4212 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0604 22:11:52.483257    4212 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0604 22:11:52.503276    4212 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0604 22:11:52.514143    4212 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0604 22:11:52.552096    4212 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0604 22:11:52.595083    4212 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0604 22:11:52.631029    4212 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0604 22:11:52.663789    4212 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0604 22:11:52.700208    4212 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0604 22:11:52.734628    4212 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0604 22:11:52.770366    4212 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0604 22:11:52.803140    4212 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0604 22:11:52.837164    4212 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0604 22:11:52.868958    4212 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0604 22:11:53.093971    4212 ssh_runner.go:195] Run: sudo systemctl restart containerd
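The sed commands above switch containerd to the cgroupfs driver and the runc v2 runtime before restarting it. An equivalent, illustrative rewrite in Go, with regex patterns that mirror the logged sed expressions (not the code minikube actually runs):

package main

import (
	"fmt"
	"regexp"
)

// Force containerd to the cgroupfs driver and the runc v2 runtime
// inside config.toml, as the sed steps in the log do.
func main() {
	config := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  runtime_type = "io.containerd.runtime.v1.linux"`

	systemd := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	config = systemd.ReplaceAllString(config, "${1}SystemdCgroup = false")

	runtime := regexp.MustCompile(`"io\.containerd\.runtime\.v1\.linux"`)
	config = runtime.ReplaceAllString(config, `"io.containerd.runc.v2"`)

	fmt.Println(config)
}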
	I0604 22:11:53.134498    4212 start.go:494] detecting cgroup driver to use...
	I0604 22:11:53.146791    4212 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0604 22:11:53.188485    4212 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0604 22:11:53.225944    4212 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0604 22:11:53.277312    4212 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0604 22:11:53.316314    4212 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0604 22:11:53.357308    4212 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0604 22:11:53.423499    4212 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0604 22:11:53.451404    4212 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0604 22:11:53.499972    4212 ssh_runner.go:195] Run: which cri-dockerd
	I0604 22:11:53.519118    4212 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0604 22:11:53.539973    4212 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0604 22:11:53.592956    4212 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0604 22:11:53.808329    4212 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0604 22:11:54.018198    4212 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0604 22:11:54.018530    4212 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0604 22:11:54.064278    4212 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0604 22:11:54.280051    4212 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0604 22:11:56.839244    4212 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5591719s)
	I0604 22:11:56.849454    4212 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0604 22:11:56.890756    4212 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0604 22:11:56.929830    4212 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0604 22:11:57.133563    4212 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0604 22:11:57.333219    4212 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0604 22:11:57.541120    4212 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0604 22:11:57.589943    4212 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0604 22:11:57.631087    4212 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0604 22:11:57.837317    4212 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0604 22:11:57.956121    4212 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0604 22:11:57.970992    4212 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0604 22:11:57.983114    4212 start.go:562] Will wait 60s for crictl version
	I0604 22:11:57.994408    4212 ssh_runner.go:195] Run: which crictl
	I0604 22:11:58.013762    4212 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0604 22:11:58.074595    4212 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.1.3
	RuntimeApiVersion:  v1
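The crictl output above is a set of plain "Key:  value" lines. A small Go sketch of parsing it into a map, fed with the exact output shown (illustrative only):

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// parseCrictlVersion splits the `crictl version` output into key/value pairs
// (Version, RuntimeName, RuntimeVersion, RuntimeApiVersion).
func parseCrictlVersion(out string) map[string]string {
	fields := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(out))
	for sc.Scan() {
		parts := strings.SplitN(sc.Text(), ":", 2)
		if len(parts) == 2 {
			fields[strings.TrimSpace(parts[0])] = strings.TrimSpace(parts[1])
		}
	}
	return fields
}

func main() {
	out := "Version:  0.1.0\nRuntimeName:  docker\nRuntimeVersion:  26.1.3\nRuntimeApiVersion:  v1\n"
	fmt.Println(parseCrictlVersion(out)["RuntimeVersion"]) // 26.1.3
}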
	I0604 22:11:58.085076    4212 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0604 22:11:58.129196    4212 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0604 22:11:58.166609    4212 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.1.3 ...
	I0604 22:11:58.166609    4212 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0604 22:11:58.171470    4212 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0604 22:11:58.171470    4212 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0604 22:11:58.171470    4212 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0604 22:11:58.171470    4212 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:24:f8:85 Flags:up|broadcast|multicast|running}
	I0604 22:11:58.174385    4212 ip.go:210] interface addr: fe80::4093:d10:ab69:6c7d/64
	I0604 22:11:58.174385    4212 ip.go:210] interface addr: 172.20.128.1/20
	I0604 22:11:58.188520    4212 ssh_runner.go:195] Run: grep 172.20.128.1	host.minikube.internal$ /etc/hosts
	I0604 22:11:58.195861    4212 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.20.128.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
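The bash one-liner above makes the host.minikube.internal mapping idempotent: any existing line for that name is dropped and a fresh "IP<TAB>name" entry is appended. A Go sketch of the same filter-and-append logic (the function name is made up for illustration):

package main

import (
	"fmt"
	"strings"
)

// updateHostsEntry drops any existing line ending in "\t<name>" and appends
// the new mapping, mirroring the grep/echo pipeline in the log.
func updateHostsEntry(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(hosts, "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue
		}
		if line != "" {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	hosts := "127.0.0.1\tlocalhost\n"
	fmt.Print(updateHostsEntry(hosts, "172.20.128.1", "host.minikube.internal"))
}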
	I0604 22:11:58.235626    4212 kubeadm.go:877] updating cluster {Name:ha-609500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19024/minikube-v1.33.1-1717518792-19024-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717518322-19024@sha256:d2210ba725128d67c6173c8b8d82d6c8736e8dad7a6c389a278f795205c6764f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1
ClusterName:ha-609500 Namespace:default APIServerHAVIP:172.20.143.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.131.101 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptio
ns:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0604 22:11:58.235626    4212 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0604 22:11:58.240909    4212 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0604 22:11:58.273092    4212 docker.go:685] Got preloaded images: 
	I0604 22:11:58.273092    4212 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.1 wasn't preloaded
	I0604 22:11:58.286408    4212 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0604 22:11:58.321782    4212 ssh_runner.go:195] Run: which lz4
	I0604 22:11:58.327365    4212 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0604 22:11:58.340099    4212 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0604 22:11:58.349743    4212 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0604 22:11:58.349950    4212 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359564351 bytes)
	I0604 22:12:00.288171    4212 docker.go:649] duration metric: took 1.9583784s to copy over tarball
	I0604 22:12:00.299872    4212 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0604 22:12:08.779294    4212 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.4776668s)
	I0604 22:12:08.793962    4212 ssh_runner.go:146] rm: /preloaded.tar.lz4
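The preload step copies the lz4 tarball into the guest and unpacks it into /var so Docker starts with the Kubernetes images already present. A hedged Go sketch of those two commands (SSH target and key handling simplified; not minikube's ssh_runner):

package main

import (
	"fmt"
	"os/exec"
)

// Copy the preload tarball to the guest, then extract it with the same
// flags as the logged command (preserve xattrs, decompress with lz4).
func main() {
	tarball := `C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4`
	target := "docker@172.20.131.101"

	if err := exec.Command("scp", tarball, target+":/preloaded.tar.lz4").Run(); err != nil {
		panic(err)
	}
	extract := "sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4 && sudo rm -f /preloaded.tar.lz4"
	if err := exec.Command("ssh", target, extract).Run(); err != nil {
		panic(err)
	}
	fmt.Println("preloaded images extracted")
}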
	I0604 22:12:08.866528    4212 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0604 22:12:08.888348    4212 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0604 22:12:08.935856    4212 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0604 22:12:09.152358    4212 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0604 22:12:12.183325    4212 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.0309419s)
	I0604 22:12:12.193567    4212 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0604 22:12:12.218652    4212 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0604 22:12:12.218652    4212 cache_images.go:84] Images are preloaded, skipping loading
	I0604 22:12:12.218652    4212 kubeadm.go:928] updating node { 172.20.131.101 8443 v1.30.1 docker true true} ...
	I0604 22:12:12.219189    4212 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-609500 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.20.131.101
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-609500 Namespace:default APIServerHAVIP:172.20.143.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0604 22:12:12.228353    4212 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0604 22:12:12.264775    4212 cni.go:84] Creating CNI manager for ""
	I0604 22:12:12.264775    4212 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0604 22:12:12.264775    4212 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0604 22:12:12.264917    4212 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.20.131.101 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-609500 NodeName:ha-609500 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.20.131.101"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.20.131.101 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuberne
tes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0604 22:12:12.265496    4212 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.20.131.101
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-609500"
	  kubeletExtraArgs:
	    node-ip: 172.20.131.101
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.20.131.101"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0604 22:12:12.265625    4212 kube-vip.go:115] generating kube-vip config ...
	I0604 22:12:12.278633    4212 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0604 22:12:12.306365    4212 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0604 22:12:12.307310    4212 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.20.143.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
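The kube-vip static-pod manifest above is generated from a template with the HA VIP (172.20.143.254), API port and interface filled in. A condensed Go sketch of that templating step (the template here is abbreviated and is not minikube's full kube-vip.go template):

package main

import (
	"os"
	"text/template"
)

// A shortened kube-vip manifest template; VIP, port and interface are inputs.
const vipTmpl = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - args: ["manager"]
    env:
    - {name: port, value: "{{.Port}}"}
    - {name: vip_interface, value: {{.Interface}}}
    - {name: address, value: {{.VIP}}}
    - {name: cp_enable, value: "true"}
    - {name: lb_enable, value: "true"}
    image: ghcr.io/kube-vip/kube-vip:v0.8.0
    name: kube-vip
  hostNetwork: true
`

func main() {
	t := template.Must(template.New("kube-vip").Parse(vipTmpl))
	_ = t.Execute(os.Stdout, struct {
		VIP, Interface string
		Port           int
	}{VIP: "172.20.143.254", Interface: "eth0", Port: 8443})
}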
	I0604 22:12:12.319473    4212 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0604 22:12:12.336471    4212 binaries.go:44] Found k8s binaries, skipping transfer
	I0604 22:12:12.349039    4212 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0604 22:12:12.368317    4212 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (310 bytes)
	I0604 22:12:12.402617    4212 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0604 22:12:12.435954    4212 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0604 22:12:12.468305    4212 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0604 22:12:12.512058    4212 ssh_runner.go:195] Run: grep 172.20.143.254	control-plane.minikube.internal$ /etc/hosts
	I0604 22:12:12.514862    4212 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.20.143.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0604 22:12:12.550878    4212 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0604 22:12:12.749703    4212 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0604 22:12:12.779121    4212 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500 for IP: 172.20.131.101
	I0604 22:12:12.779431    4212 certs.go:194] generating shared ca certs ...
	I0604 22:12:12.779493    4212 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0604 22:12:12.780401    4212 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0604 22:12:12.780739    4212 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0604 22:12:12.780923    4212 certs.go:256] generating profile certs ...
	I0604 22:12:12.781740    4212 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\client.key
	I0604 22:12:12.781909    4212 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\client.crt with IP's: []
	I0604 22:12:13.030544    4212 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\client.crt ...
	I0604 22:12:13.030544    4212 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\client.crt: {Name:mk76295a403c9aeb3abfbf53fa2b5074ca3f3840 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0604 22:12:13.036636    4212 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\client.key ...
	I0604 22:12:13.036636    4212 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\client.key: {Name:mke61efda45fff399bb2b7780b981438fd466b43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0604 22:12:13.037858    4212 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\apiserver.key.cf55e88f
	I0604 22:12:13.038950    4212 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\apiserver.crt.cf55e88f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.20.131.101 172.20.143.254]
	I0604 22:12:13.310698    4212 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\apiserver.crt.cf55e88f ...
	I0604 22:12:13.310698    4212 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\apiserver.crt.cf55e88f: {Name:mk09bea6f2657d7aad3850bfc0259de68b634b6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0604 22:12:13.316201    4212 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\apiserver.key.cf55e88f ...
	I0604 22:12:13.316201    4212 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\apiserver.key.cf55e88f: {Name:mk609125828db1d8a4dca93b261182711db39ea8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0604 22:12:13.316889    4212 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\apiserver.crt.cf55e88f -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\apiserver.crt
	I0604 22:12:13.339963    4212 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\apiserver.key.cf55e88f -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\apiserver.key
	I0604 22:12:13.341224    4212 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\proxy-client.key
	I0604 22:12:13.341859    4212 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\proxy-client.crt with IP's: []
	I0604 22:12:13.500513    4212 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\proxy-client.crt ...
	I0604 22:12:13.500513    4212 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\proxy-client.crt: {Name:mk345d8640268e77e7bedddb09b0d06028d9e079 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0604 22:12:13.502089    4212 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\proxy-client.key ...
	I0604 22:12:13.502089    4212 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\proxy-client.key: {Name:mk1562ab6c3b41bbfe12183bd74dcf651200f9cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0604 22:12:13.503979    4212 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0604 22:12:13.504414    4212 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0604 22:12:13.504651    4212 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0604 22:12:13.504788    4212 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0604 22:12:13.504788    4212 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0604 22:12:13.504788    4212 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0604 22:12:13.504788    4212 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0604 22:12:13.510823    4212 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
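The profile certs generated above are CA-signed certificates whose IP SANs cover the in-cluster service IP, localhost, the node IP and the HA VIP. A self-contained Go sketch of issuing such a cert with crypto/x509 (throwaway CA and short lifetime for illustration; not minikube's crypto.go):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Throwaway CA standing in for the persisted minikubeCA key pair.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Serving cert with the same IP SANs the log lists for the apiserver.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("172.20.131.101"), net.ParseIP("172.20.143.254"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	fmt.Printf("apiserver cert: %d DER bytes, SANs: %v\n", len(der), srvTmpl.IPAddresses)
}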
	I0604 22:12:13.518403    4212 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14064.pem (1338 bytes)
	W0604 22:12:13.519063    4212 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14064_empty.pem, impossibly tiny 0 bytes
	I0604 22:12:13.519063    4212 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0604 22:12:13.519411    4212 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0604 22:12:13.519669    4212 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0604 22:12:13.519915    4212 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0604 22:12:13.520167    4212 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\140642.pem (1708 bytes)
	I0604 22:12:13.520167    4212 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0604 22:12:13.520167    4212 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14064.pem -> /usr/share/ca-certificates/14064.pem
	I0604 22:12:13.520167    4212 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\140642.pem -> /usr/share/ca-certificates/140642.pem
	I0604 22:12:13.521640    4212 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0604 22:12:13.573390    4212 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0604 22:12:13.629212    4212 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0604 22:12:13.682187    4212 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0604 22:12:13.732428    4212 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0604 22:12:13.778800    4212 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0604 22:12:13.835734    4212 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0604 22:12:13.885900    4212 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0604 22:12:13.935256    4212 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0604 22:12:13.986239    4212 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14064.pem --> /usr/share/ca-certificates/14064.pem (1338 bytes)
	I0604 22:12:14.033867    4212 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\140642.pem --> /usr/share/ca-certificates/140642.pem (1708 bytes)
	I0604 22:12:14.091738    4212 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0604 22:12:14.139066    4212 ssh_runner.go:195] Run: openssl version
	I0604 22:12:14.163378    4212 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/140642.pem && ln -fs /usr/share/ca-certificates/140642.pem /etc/ssl/certs/140642.pem"
	I0604 22:12:14.197003    4212 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/140642.pem
	I0604 22:12:14.206778    4212 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun  4 21:50 /usr/share/ca-certificates/140642.pem
	I0604 22:12:14.219282    4212 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/140642.pem
	I0604 22:12:14.242468    4212 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/140642.pem /etc/ssl/certs/3ec20f2e.0"
	I0604 22:12:14.276739    4212 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0604 22:12:14.312375    4212 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0604 22:12:14.325881    4212 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun  4 21:33 /usr/share/ca-certificates/minikubeCA.pem
	I0604 22:12:14.339953    4212 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0604 22:12:14.360666    4212 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0604 22:12:14.396566    4212 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14064.pem && ln -fs /usr/share/ca-certificates/14064.pem /etc/ssl/certs/14064.pem"
	I0604 22:12:14.433330    4212 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14064.pem
	I0604 22:12:14.443014    4212 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun  4 21:50 /usr/share/ca-certificates/14064.pem
	I0604 22:12:14.455553    4212 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14064.pem
	I0604 22:12:14.480174    4212 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14064.pem /etc/ssl/certs/51391683.0"
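Each CA bundle above is exposed to OpenSSL-based clients by symlinking it as /etc/ssl/certs/<subject-hash>.0, where the hash comes from "openssl x509 -hash -noout". A Go sketch of that link step (guest-side paths as in the log; error handling kept minimal):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkCert computes the OpenSSL subject hash of a PEM and links the file
// as /etc/ssl/certs/<hash>.0 so TLS clients can discover it.
func linkCert(pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := "/etc/ssl/certs/" + hash + ".0"
	if _, err := os.Lstat(link); err == nil {
		return nil // already linked
	}
	return os.Symlink(pem, link)
}

func main() {
	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println("link failed:", err)
	}
}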
	I0604 22:12:14.513334    4212 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0604 22:12:14.520255    4212 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0604 22:12:14.520838    4212 kubeadm.go:391] StartCluster: {Name:ha-609500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19024/minikube-v1.33.1-1717518792-19024-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717518322-19024@sha256:d2210ba725128d67c6173c8b8d82d6c8736e8dad7a6c389a278f795205c6764f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 Clu
sterName:ha-609500 Namespace:default APIServerHAVIP:172.20.143.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.131.101 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:
[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0604 22:12:14.531475    4212 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0604 22:12:14.574684    4212 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0604 22:12:14.611675    4212 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0604 22:12:14.646694    4212 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0604 22:12:14.667660    4212 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0604 22:12:14.667718    4212 kubeadm.go:156] found existing configuration files:
	
	I0604 22:12:14.679059    4212 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0604 22:12:14.698727    4212 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0604 22:12:14.712319    4212 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0604 22:12:14.746741    4212 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0604 22:12:14.768900    4212 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0604 22:12:14.785345    4212 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0604 22:12:14.817601    4212 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0604 22:12:14.836810    4212 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0604 22:12:14.850114    4212 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0604 22:12:14.880024    4212 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0604 22:12:14.897317    4212 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0604 22:12:14.910125    4212 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0604 22:12:14.928321    4212 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0604 22:12:15.395513    4212 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0604 22:12:31.600002    4212 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0604 22:12:31.600124    4212 kubeadm.go:309] [preflight] Running pre-flight checks
	I0604 22:12:31.600332    4212 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0604 22:12:31.600612    4212 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0604 22:12:31.600781    4212 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0604 22:12:31.600781    4212 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0604 22:12:31.603627    4212 out.go:204]   - Generating certificates and keys ...
	I0604 22:12:31.603668    4212 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0604 22:12:31.603668    4212 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0604 22:12:31.603668    4212 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0604 22:12:31.603668    4212 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0604 22:12:31.604267    4212 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0604 22:12:31.604306    4212 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0604 22:12:31.604306    4212 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0604 22:12:31.604306    4212 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-609500 localhost] and IPs [172.20.131.101 127.0.0.1 ::1]
	I0604 22:12:31.604306    4212 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0604 22:12:31.604887    4212 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-609500 localhost] and IPs [172.20.131.101 127.0.0.1 ::1]
	I0604 22:12:31.605032    4212 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0604 22:12:31.605143    4212 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0604 22:12:31.605143    4212 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0604 22:12:31.605143    4212 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0604 22:12:31.605786    4212 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0604 22:12:31.605936    4212 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0604 22:12:31.606053    4212 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0604 22:12:31.606155    4212 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0604 22:12:31.606234    4212 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0604 22:12:31.606234    4212 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0604 22:12:31.606234    4212 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0604 22:12:31.611367    4212 out.go:204]   - Booting up control plane ...
	I0604 22:12:31.611541    4212 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0604 22:12:31.611623    4212 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0604 22:12:31.611750    4212 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0604 22:12:31.612027    4212 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0604 22:12:31.612027    4212 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0604 22:12:31.612027    4212 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0604 22:12:31.612632    4212 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0604 22:12:31.612632    4212 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0604 22:12:31.612632    4212 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.004475988s
	I0604 22:12:31.613206    4212 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0604 22:12:31.613385    4212 kubeadm.go:309] [api-check] The API server is healthy after 9.002398182s
	I0604 22:12:31.613385    4212 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0604 22:12:31.613385    4212 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0604 22:12:31.613922    4212 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0604 22:12:31.614269    4212 kubeadm.go:309] [mark-control-plane] Marking the node ha-609500 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0604 22:12:31.614269    4212 kubeadm.go:309] [bootstrap-token] Using token: 1j4sj8.yfunpww2vrg63q4l
	I0604 22:12:31.618451    4212 out.go:204]   - Configuring RBAC rules ...
	I0604 22:12:31.618451    4212 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0604 22:12:31.619597    4212 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0604 22:12:31.619597    4212 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0604 22:12:31.620140    4212 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0604 22:12:31.620367    4212 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0604 22:12:31.620584    4212 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0604 22:12:31.620815    4212 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0604 22:12:31.621037    4212 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0604 22:12:31.621037    4212 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0604 22:12:31.621037    4212 kubeadm.go:309] 
	I0604 22:12:31.621310    4212 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0604 22:12:31.621310    4212 kubeadm.go:309] 
	I0604 22:12:31.621446    4212 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0604 22:12:31.621446    4212 kubeadm.go:309] 
	I0604 22:12:31.621446    4212 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0604 22:12:31.621446    4212 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0604 22:12:31.621446    4212 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0604 22:12:31.621446    4212 kubeadm.go:309] 
	I0604 22:12:31.621446    4212 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0604 22:12:31.621446    4212 kubeadm.go:309] 
	I0604 22:12:31.621446    4212 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0604 22:12:31.621446    4212 kubeadm.go:309] 
	I0604 22:12:31.621446    4212 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0604 22:12:31.621446    4212 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0604 22:12:31.622753    4212 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0604 22:12:31.622753    4212 kubeadm.go:309] 
	I0604 22:12:31.622753    4212 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0604 22:12:31.622753    4212 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0604 22:12:31.622753    4212 kubeadm.go:309] 
	I0604 22:12:31.623339    4212 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 1j4sj8.yfunpww2vrg63q4l \
	I0604 22:12:31.623339    4212 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:0ff0b921e821f4428dbcf012dd411f291cf65527f1de8321919343295d694e15 \
	I0604 22:12:31.623339    4212 kubeadm.go:309] 	--control-plane 
	I0604 22:12:31.623339    4212 kubeadm.go:309] 
	I0604 22:12:31.623339    4212 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0604 22:12:31.623339    4212 kubeadm.go:309] 
	I0604 22:12:31.624034    4212 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 1j4sj8.yfunpww2vrg63q4l \
	I0604 22:12:31.624227    4212 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:0ff0b921e821f4428dbcf012dd411f291cf65527f1de8321919343295d694e15 
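The --discovery-token-ca-cert-hash value printed by kubeadm above is the SHA-256 of the cluster CA's DER-encoded Subject Public Key Info, prefixed with "sha256:". A Go sketch that recomputes it from the CA certificate (path taken from the log):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt") // guest-side path from the log
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA key.
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		panic(err)
	}
	fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
}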
	I0604 22:12:31.624227    4212 cni.go:84] Creating CNI manager for ""
	I0604 22:12:31.624227    4212 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0604 22:12:31.625439    4212 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0604 22:12:31.634469    4212 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0604 22:12:31.653088    4212 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.1/kubectl ...
	I0604 22:12:31.653088    4212 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0604 22:12:31.707586    4212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0604 22:12:32.492793    4212 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0604 22:12:32.507981    4212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-609500 minikube.k8s.io/updated_at=2024_06_04T22_12_32_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=901ac483c3e1097c63cda7493d918b612a8127f5 minikube.k8s.io/name=ha-609500 minikube.k8s.io/primary=true
	I0604 22:12:32.507981    4212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 22:12:32.533228    4212 ops.go:34] apiserver oom_adj: -16
	I0604 22:12:32.758890    4212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 22:12:33.267466    4212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 22:12:33.759564    4212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 22:12:34.270022    4212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 22:12:34.789017    4212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 22:12:35.277393    4212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 22:12:35.765950    4212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 22:12:36.273843    4212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 22:12:36.767726    4212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 22:12:37.273170    4212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 22:12:37.769398    4212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 22:12:38.274584    4212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 22:12:38.771162    4212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 22:12:39.269284    4212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 22:12:39.762128    4212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 22:12:40.271995    4212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 22:12:40.760625    4212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 22:12:41.266860    4212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 22:12:41.766893    4212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 22:12:42.272816    4212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 22:12:42.775274    4212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 22:12:43.261041    4212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 22:12:43.776255    4212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 22:12:44.271743    4212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 22:12:44.773991    4212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 22:12:44.917568    4212 kubeadm.go:1107] duration metric: took 12.4246731s to wait for elevateKubeSystemPrivileges
	W0604 22:12:44.917568    4212 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0604 22:12:44.917568    4212 kubeadm.go:393] duration metric: took 30.3964828s to StartCluster
	I0604 22:12:44.917568    4212 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0604 22:12:44.917568    4212 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0604 22:12:44.919316    4212 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0604 22:12:44.921131    4212 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0604 22:12:44.921227    4212 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:172.20.131.101 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0604 22:12:44.921227    4212 start.go:240] waiting for startup goroutines ...
	I0604 22:12:44.921285    4212 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0604 22:12:44.921419    4212 addons.go:69] Setting storage-provisioner=true in profile "ha-609500"
	I0604 22:12:44.921419    4212 addons.go:69] Setting default-storageclass=true in profile "ha-609500"
	I0604 22:12:44.921419    4212 addons.go:234] Setting addon storage-provisioner=true in "ha-609500"
	I0604 22:12:44.921680    4212 host.go:66] Checking if "ha-609500" exists ...
	I0604 22:12:44.921750    4212 config.go:182] Loaded profile config "ha-609500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0604 22:12:44.921481    4212 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-609500"
	I0604 22:12:44.922613    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500 ).state
	I0604 22:12:44.922613    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500 ).state
	I0604 22:12:45.092000    4212 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.20.128.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0604 22:12:45.515336    4212 start.go:946] {"host.minikube.internal": 172.20.128.1} host record injected into CoreDNS's ConfigMap
	I0604 22:12:47.331262    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:12:47.331262    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:12:47.332063    4212 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0604 22:12:47.333646    4212 kapi.go:59] client config for ha-609500: &rest.Config{Host:"https://172.20.143.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-609500\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-609500\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x240e1a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0604 22:12:47.335219    4212 cert_rotation.go:137] Starting client certificate rotation controller
	I0604 22:12:47.335484    4212 addons.go:234] Setting addon default-storageclass=true in "ha-609500"
	I0604 22:12:47.335484    4212 host.go:66] Checking if "ha-609500" exists ...
	I0604 22:12:47.336306    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500 ).state
	I0604 22:12:47.347619    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:12:47.347619    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:12:47.352777    4212 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0604 22:12:47.355094    4212 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0604 22:12:47.355094    4212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0604 22:12:47.355094    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500 ).state
	I0604 22:12:49.747419    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:12:49.747419    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:12:49.757056    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500 ).networkadapters[0]).ipaddresses[0]
	I0604 22:12:49.837650    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:12:49.837874    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:12:49.837963    4212 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0604 22:12:49.837963    4212 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0604 22:12:49.838028    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500 ).state
	I0604 22:12:52.235114    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:12:52.235114    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:12:52.235114    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500 ).networkadapters[0]).ipaddresses[0]
	I0604 22:12:52.661631    4212 main.go:141] libmachine: [stdout =====>] : 172.20.131.101
	
	I0604 22:12:52.661631    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:12:52.662506    4212 sshutil.go:53] new ssh client: &{IP:172.20.131.101 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-609500\id_rsa Username:docker}
	I0604 22:12:52.830537    4212 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0604 22:12:55.022069    4212 main.go:141] libmachine: [stdout =====>] : 172.20.131.101
	
	I0604 22:12:55.022069    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:12:55.022069    4212 sshutil.go:53] new ssh client: &{IP:172.20.131.101 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-609500\id_rsa Username:docker}
	I0604 22:12:55.163616    4212 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0604 22:12:55.323129    4212 round_trippers.go:463] GET https://172.20.143.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0604 22:12:55.323129    4212 round_trippers.go:469] Request Headers:
	I0604 22:12:55.323129    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:12:55.323129    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:12:55.337375    4212 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0604 22:12:55.338254    4212 round_trippers.go:463] PUT https://172.20.143.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0604 22:12:55.338254    4212 round_trippers.go:469] Request Headers:
	I0604 22:12:55.338344    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:12:55.338344    4212 round_trippers.go:473]     Content-Type: application/json
	I0604 22:12:55.338344    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:12:55.345690    4212 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0604 22:12:55.354049    4212 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0604 22:12:55.356724    4212 addons.go:510] duration metric: took 10.4353533s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0604 22:12:55.356724    4212 start.go:245] waiting for cluster config update ...
	I0604 22:12:55.356724    4212 start.go:254] writing updated cluster config ...
	I0604 22:12:55.360090    4212 out.go:177] 
	I0604 22:12:55.373276    4212 config.go:182] Loaded profile config "ha-609500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0604 22:12:55.373276    4212 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\config.json ...
	I0604 22:12:55.379958    4212 out.go:177] * Starting "ha-609500-m02" control-plane node in "ha-609500" cluster
	I0604 22:12:55.384338    4212 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0604 22:12:55.384338    4212 cache.go:56] Caching tarball of preloaded images
	I0604 22:12:55.384338    4212 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0604 22:12:55.385017    4212 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0604 22:12:55.385104    4212 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\config.json ...
	I0604 22:12:55.390174    4212 start.go:360] acquireMachinesLock for ha-609500-m02: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0604 22:12:55.390174    4212 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-609500-m02"
	I0604 22:12:55.390750    4212 start.go:93] Provisioning new machine with config: &{Name:ha-609500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19024/minikube-v1.33.1-1717518792-19024-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717518322-19024@sha256:d2210ba725128d67c6173c8b8d82d6c8736e8dad7a6c389a278f795205c6764f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-609500 Namespace:default APIServerHAVIP:172.20.143.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.131.101 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0604 22:12:55.390750    4212 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0604 22:12:55.393292    4212 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0604 22:12:55.394145    4212 start.go:159] libmachine.API.Create for "ha-609500" (driver="hyperv")
	I0604 22:12:55.394145    4212 client.go:168] LocalClient.Create starting
	I0604 22:12:55.394145    4212 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0604 22:12:55.394886    4212 main.go:141] libmachine: Decoding PEM data...
	I0604 22:12:55.394923    4212 main.go:141] libmachine: Parsing certificate...
	I0604 22:12:55.395150    4212 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0604 22:12:55.395329    4212 main.go:141] libmachine: Decoding PEM data...
	I0604 22:12:55.395329    4212 main.go:141] libmachine: Parsing certificate...
	I0604 22:12:55.395482    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0604 22:12:57.415389    4212 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0604 22:12:57.415389    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:12:57.426539    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0604 22:12:59.309789    4212 main.go:141] libmachine: [stdout =====>] : False
	
	I0604 22:12:59.309789    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:12:59.309789    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0604 22:13:00.892986    4212 main.go:141] libmachine: [stdout =====>] : True
	
	I0604 22:13:00.892986    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:13:00.902643    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0604 22:13:04.868770    4212 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0604 22:13:04.869580    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:13:04.872161    4212 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1717518792-19024-amd64.iso...
	I0604 22:13:05.404387    4212 main.go:141] libmachine: Creating SSH key...
	I0604 22:13:05.942383    4212 main.go:141] libmachine: Creating VM...
	I0604 22:13:05.942383    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0604 22:13:09.029343    4212 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0604 22:13:09.029343    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:13:09.029343    4212 main.go:141] libmachine: Using switch "Default Switch"
	I0604 22:13:09.029343    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0604 22:13:10.864221    4212 main.go:141] libmachine: [stdout =====>] : True
	
	I0604 22:13:10.864457    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:13:10.864457    4212 main.go:141] libmachine: Creating VHD
	I0604 22:13:10.864612    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-609500-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0604 22:13:15.035846    4212 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-609500-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : F212B8AB-F9CA-4C49-9ABF-10BCC6A6423A
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0604 22:13:15.035846    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:13:15.035846    4212 main.go:141] libmachine: Writing magic tar header
	I0604 22:13:15.035846    4212 main.go:141] libmachine: Writing SSH key tar header
	I0604 22:13:15.050524    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-609500-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-609500-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0604 22:13:18.469043    4212 main.go:141] libmachine: [stdout =====>] : 
	I0604 22:13:18.469043    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:13:18.469233    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-609500-m02\disk.vhd' -SizeBytes 20000MB
	I0604 22:13:21.292465    4212 main.go:141] libmachine: [stdout =====>] : 
	I0604 22:13:21.292465    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:13:21.292548    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-609500-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-609500-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0604 22:13:25.461726    4212 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%!)(MISSING) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-609500-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0604 22:13:25.461726    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:13:25.461726    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-609500-m02 -DynamicMemoryEnabled $false
	I0604 22:13:28.030036    4212 main.go:141] libmachine: [stdout =====>] : 
	I0604 22:13:28.030036    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:13:28.030199    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-609500-m02 -Count 2
	I0604 22:13:30.524025    4212 main.go:141] libmachine: [stdout =====>] : 
	I0604 22:13:30.524025    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:13:30.524431    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-609500-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-609500-m02\boot2docker.iso'
	I0604 22:13:33.450121    4212 main.go:141] libmachine: [stdout =====>] : 
	I0604 22:13:33.450121    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:13:33.450678    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-609500-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-609500-m02\disk.vhd'
	I0604 22:13:36.463800    4212 main.go:141] libmachine: [stdout =====>] : 
	I0604 22:13:36.463800    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:13:36.463800    4212 main.go:141] libmachine: Starting VM...
	I0604 22:13:36.464119    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-609500-m02
	I0604 22:13:39.890126    4212 main.go:141] libmachine: [stdout =====>] : 
	I0604 22:13:39.890126    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:13:39.890126    4212 main.go:141] libmachine: Waiting for host to start...
	I0604 22:13:39.891070    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500-m02 ).state
	I0604 22:13:42.409572    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:13:42.409572    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:13:42.409572    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500-m02 ).networkadapters[0]).ipaddresses[0]
	I0604 22:13:45.275574    4212 main.go:141] libmachine: [stdout =====>] : 
	I0604 22:13:45.275574    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:13:46.278990    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500-m02 ).state
	I0604 22:13:48.824347    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:13:48.824347    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:13:48.825052    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500-m02 ).networkadapters[0]).ipaddresses[0]
	I0604 22:13:51.727524    4212 main.go:141] libmachine: [stdout =====>] : 
	I0604 22:13:51.727524    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:13:52.737192    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500-m02 ).state
	I0604 22:13:55.200193    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:13:55.200193    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:13:55.201174    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500-m02 ).networkadapters[0]).ipaddresses[0]
	I0604 22:13:58.086538    4212 main.go:141] libmachine: [stdout =====>] : 
	I0604 22:13:58.086606    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:13:59.099925    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500-m02 ).state
	I0604 22:14:01.589274    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:14:01.590269    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:14:01.590269    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500-m02 ).networkadapters[0]).ipaddresses[0]
	I0604 22:14:04.458502    4212 main.go:141] libmachine: [stdout =====>] : 
	I0604 22:14:04.458502    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:14:05.464476    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500-m02 ).state
	I0604 22:14:07.964749    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:14:07.964749    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:14:07.964939    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500-m02 ).networkadapters[0]).ipaddresses[0]
	I0604 22:14:10.832001    4212 main.go:141] libmachine: [stdout =====>] : 
	I0604 22:14:10.833015    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:14:11.839434    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500-m02 ).state
	I0604 22:14:14.297307    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:14:14.297963    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:14:14.297963    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500-m02 ).networkadapters[0]).ipaddresses[0]
	I0604 22:14:17.202167    4212 main.go:141] libmachine: [stdout =====>] : 172.20.128.86
	
	I0604 22:14:17.202167    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:14:17.202777    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500-m02 ).state
	I0604 22:14:19.626552    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:14:19.626552    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:14:19.626850    4212 machine.go:94] provisionDockerMachine start ...
	I0604 22:14:19.626850    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500-m02 ).state
	I0604 22:14:22.134485    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:14:22.134485    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:14:22.134485    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500-m02 ).networkadapters[0]).ipaddresses[0]
	I0604 22:14:24.999627    4212 main.go:141] libmachine: [stdout =====>] : 172.20.128.86
	
	I0604 22:14:24.999691    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:14:25.006428    4212 main.go:141] libmachine: Using SSH client type: native
	I0604 22:14:25.006428    4212 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf6a540] 0xf6d120 <nil>  [] 0s} 172.20.128.86 22 <nil> <nil>}
	I0604 22:14:25.006428    4212 main.go:141] libmachine: About to run SSH command:
	hostname
	I0604 22:14:25.151596    4212 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0604 22:14:25.151705    4212 buildroot.go:166] provisioning hostname "ha-609500-m02"
	I0604 22:14:25.151759    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500-m02 ).state
	I0604 22:14:27.541150    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:14:27.541342    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:14:27.541342    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500-m02 ).networkadapters[0]).ipaddresses[0]
	I0604 22:14:30.432498    4212 main.go:141] libmachine: [stdout =====>] : 172.20.128.86
	
	I0604 22:14:30.432498    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:14:30.438033    4212 main.go:141] libmachine: Using SSH client type: native
	I0604 22:14:30.438420    4212 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf6a540] 0xf6d120 <nil>  [] 0s} 172.20.128.86 22 <nil> <nil>}
	I0604 22:14:30.438593    4212 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-609500-m02 && echo "ha-609500-m02" | sudo tee /etc/hostname
	I0604 22:14:30.617297    4212 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-609500-m02
	
	I0604 22:14:30.617343    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500-m02 ).state
	I0604 22:14:33.034710    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:14:33.034792    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:14:33.034874    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500-m02 ).networkadapters[0]).ipaddresses[0]
	I0604 22:14:35.932833    4212 main.go:141] libmachine: [stdout =====>] : 172.20.128.86
	
	I0604 22:14:35.932833    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:14:35.939110    4212 main.go:141] libmachine: Using SSH client type: native
	I0604 22:14:35.939884    4212 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf6a540] 0xf6d120 <nil>  [] 0s} 172.20.128.86 22 <nil> <nil>}
	I0604 22:14:35.939884    4212 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-609500-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-609500-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-609500-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0604 22:14:36.100103    4212 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0604 22:14:36.100297    4212 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0604 22:14:36.100297    4212 buildroot.go:174] setting up certificates
	I0604 22:14:36.100297    4212 provision.go:84] configureAuth start
	I0604 22:14:36.100395    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500-m02 ).state
	I0604 22:14:38.503070    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:14:38.503070    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:14:38.503070    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500-m02 ).networkadapters[0]).ipaddresses[0]
	I0604 22:14:41.360099    4212 main.go:141] libmachine: [stdout =====>] : 172.20.128.86
	
	I0604 22:14:41.360490    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:14:41.360557    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500-m02 ).state
	I0604 22:14:43.841165    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:14:43.841631    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:14:43.841631    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500-m02 ).networkadapters[0]).ipaddresses[0]
	I0604 22:14:46.760256    4212 main.go:141] libmachine: [stdout =====>] : 172.20.128.86
	
	I0604 22:14:46.760256    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:14:46.760256    4212 provision.go:143] copyHostCerts
	I0604 22:14:46.760256    4212 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0604 22:14:46.760256    4212 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0604 22:14:46.760256    4212 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0604 22:14:46.762224    4212 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0604 22:14:46.763339    4212 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0604 22:14:46.763723    4212 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0604 22:14:46.763723    4212 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0604 22:14:46.764039    4212 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0604 22:14:46.764337    4212 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0604 22:14:46.765206    4212 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0604 22:14:46.765206    4212 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0604 22:14:46.765574    4212 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0604 22:14:46.766864    4212 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-609500-m02 san=[127.0.0.1 172.20.128.86 ha-609500-m02 localhost minikube]
	I0604 22:14:46.987872    4212 provision.go:177] copyRemoteCerts
	I0604 22:14:47.000211    4212 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0604 22:14:47.000211    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500-m02 ).state
	I0604 22:14:49.404969    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:14:49.404969    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:14:49.405229    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500-m02 ).networkadapters[0]).ipaddresses[0]
	I0604 22:14:52.350548    4212 main.go:141] libmachine: [stdout =====>] : 172.20.128.86
	
	I0604 22:14:52.350548    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:14:52.351607    4212 sshutil.go:53] new ssh client: &{IP:172.20.128.86 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-609500-m02\id_rsa Username:docker}
	I0604 22:14:52.466654    4212 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.466398s)
	I0604 22:14:52.466772    4212 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0604 22:14:52.467329    4212 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0604 22:14:52.523441    4212 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0604 22:14:52.524097    4212 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0604 22:14:52.581684    4212 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0604 22:14:52.582153    4212 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0604 22:14:52.640869    4212 provision.go:87] duration metric: took 16.5404365s to configureAuth
	I0604 22:14:52.640869    4212 buildroot.go:189] setting minikube options for container-runtime
	I0604 22:14:52.642063    4212 config.go:182] Loaded profile config "ha-609500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0604 22:14:52.642145    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500-m02 ).state
	I0604 22:14:55.062634    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:14:55.062634    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:14:55.063023    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500-m02 ).networkadapters[0]).ipaddresses[0]
	I0604 22:14:57.956978    4212 main.go:141] libmachine: [stdout =====>] : 172.20.128.86
	
	I0604 22:14:57.957743    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:14:57.964441    4212 main.go:141] libmachine: Using SSH client type: native
	I0604 22:14:57.964575    4212 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf6a540] 0xf6d120 <nil>  [] 0s} 172.20.128.86 22 <nil> <nil>}
	I0604 22:14:57.964575    4212 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0604 22:14:58.106314    4212 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0604 22:14:58.106314    4212 buildroot.go:70] root file system type: tmpfs
	I0604 22:14:58.106568    4212 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0604 22:14:58.106669    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500-m02 ).state
	I0604 22:15:00.507901    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:15:00.507934    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:15:00.508140    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500-m02 ).networkadapters[0]).ipaddresses[0]
	I0604 22:15:03.387107    4212 main.go:141] libmachine: [stdout =====>] : 172.20.128.86
	
	I0604 22:15:03.387107    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:15:03.393194    4212 main.go:141] libmachine: Using SSH client type: native
	I0604 22:15:03.394191    4212 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf6a540] 0xf6d120 <nil>  [] 0s} 172.20.128.86 22 <nil> <nil>}
	I0604 22:15:03.394191    4212 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.20.131.101"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0604 22:15:03.567230    4212 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.20.131.101
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0604 22:15:03.567230    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500-m02 ).state
	I0604 22:15:05.969823    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:15:05.970665    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:15:05.970665    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500-m02 ).networkadapters[0]).ipaddresses[0]
	I0604 22:15:08.860711    4212 main.go:141] libmachine: [stdout =====>] : 172.20.128.86
	
	I0604 22:15:08.861671    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:15:08.868155    4212 main.go:141] libmachine: Using SSH client type: native
	I0604 22:15:08.868155    4212 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf6a540] 0xf6d120 <nil>  [] 0s} 172.20.128.86 22 <nil> <nil>}
	I0604 22:15:08.868747    4212 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0604 22:15:11.154072    4212 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0604 22:15:11.154252    4212 machine.go:97] duration metric: took 51.526929s to provisionDockerMachine
	I0604 22:15:11.154252    4212 client.go:171] duration metric: took 2m15.7590028s to LocalClient.Create
	I0604 22:15:11.154347    4212 start.go:167] duration metric: took 2m15.7590028s to libmachine.API.Create "ha-609500"
	I0604 22:15:11.154347    4212 start.go:293] postStartSetup for "ha-609500-m02" (driver="hyperv")
	I0604 22:15:11.154402    4212 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0604 22:15:11.171040    4212 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0604 22:15:11.171040    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500-m02 ).state
	I0604 22:15:13.587605    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:15:13.588185    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:15:13.588185    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500-m02 ).networkadapters[0]).ipaddresses[0]
	I0604 22:15:16.475759    4212 main.go:141] libmachine: [stdout =====>] : 172.20.128.86
	
	I0604 22:15:16.475821    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:15:16.475821    4212 sshutil.go:53] new ssh client: &{IP:172.20.128.86 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-609500-m02\id_rsa Username:docker}
	I0604 22:15:16.589130    4212 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.418045s)
	I0604 22:15:16.603270    4212 ssh_runner.go:195] Run: cat /etc/os-release
	I0604 22:15:16.613507    4212 info.go:137] Remote host: Buildroot 2023.02.9
	I0604 22:15:16.613507    4212 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0604 22:15:16.613507    4212 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0604 22:15:16.614863    4212 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\140642.pem -> 140642.pem in /etc/ssl/certs
	I0604 22:15:16.614863    4212 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\140642.pem -> /etc/ssl/certs/140642.pem
	I0604 22:15:16.628498    4212 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0604 22:15:16.655005    4212 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\140642.pem --> /etc/ssl/certs/140642.pem (1708 bytes)
	I0604 22:15:16.713603    4212 start.go:296] duration metric: took 5.5592106s for postStartSetup
	I0604 22:15:16.716731    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500-m02 ).state
	I0604 22:15:19.119533    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:15:19.119710    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:15:19.119710    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500-m02 ).networkadapters[0]).ipaddresses[0]
	I0604 22:15:22.070373    4212 main.go:141] libmachine: [stdout =====>] : 172.20.128.86
	
	I0604 22:15:22.070373    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:15:22.071456    4212 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\config.json ...
	I0604 22:15:22.077994    4212 start.go:128] duration metric: took 2m26.6860506s to createHost
	I0604 22:15:22.077994    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500-m02 ).state
	I0604 22:15:24.507290    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:15:24.507290    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:15:24.507491    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500-m02 ).networkadapters[0]).ipaddresses[0]
	I0604 22:15:27.403271    4212 main.go:141] libmachine: [stdout =====>] : 172.20.128.86
	
	I0604 22:15:27.404042    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:15:27.409585    4212 main.go:141] libmachine: Using SSH client type: native
	I0604 22:15:27.410233    4212 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf6a540] 0xf6d120 <nil>  [] 0s} 172.20.128.86 22 <nil> <nil>}
	I0604 22:15:27.410233    4212 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0604 22:15:27.556979    4212 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717539327.558099855
	
	I0604 22:15:27.557073    4212 fix.go:216] guest clock: 1717539327.558099855
	I0604 22:15:27.557073    4212 fix.go:229] Guest: 2024-06-04 22:15:27.558099855 +0000 UTC Remote: 2024-06-04 22:15:22.0779942 +0000 UTC m=+368.620306501 (delta=5.480105655s)
	I0604 22:15:27.557073    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500-m02 ).state
	I0604 22:15:30.002193    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:15:30.002193    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:15:30.002193    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500-m02 ).networkadapters[0]).ipaddresses[0]
	I0604 22:15:32.897972    4212 main.go:141] libmachine: [stdout =====>] : 172.20.128.86
	
	I0604 22:15:32.897972    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:15:32.903144    4212 main.go:141] libmachine: Using SSH client type: native
	I0604 22:15:32.903144    4212 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf6a540] 0xf6d120 <nil>  [] 0s} 172.20.128.86 22 <nil> <nil>}
	I0604 22:15:32.903144    4212 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1717539327
	I0604 22:15:33.070392    4212 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Jun  4 22:15:27 UTC 2024
	
	I0604 22:15:33.070392    4212 fix.go:236] clock set: Tue Jun  4 22:15:27 UTC 2024
	 (err=<nil>)
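
The fix.go lines above read the guest clock over SSH (`date +%s.%N`), compare it against the host's clock, and reset the guest with `sudo date -s @<seconds>` when they drift (here the delta was ~5.5s). A minimal sketch of that check, assuming a threshold-based fixup; the tolerance value is an assumption, not taken from the log:

    package main

    import (
    	"fmt"
    	"math"
    	"strconv"
    	"time"
    )

    func main() {
    	guestOut := "1717539327.558099855" // value captured in the log above
    	guestSec, err := strconv.ParseFloat(guestOut, 64)
    	if err != nil {
    		panic(err)
    	}
    	local := float64(time.Now().UnixNano()) / 1e9
    	drift := time.Duration(math.Abs(local-guestSec) * float64(time.Second))

    	const tolerance = 2 * time.Second // assumed threshold, not from the log
    	if drift > tolerance {
    		// Reset the guest clock to whole seconds, as the log's command does.
    		cmd := fmt.Sprintf("sudo date -s @%d", time.Now().Unix())
    		fmt.Println("drift", drift, "-> would run:", cmd)
    	} else {
    		fmt.Println("drift", drift, "within tolerance, nothing to do")
    	}
    }
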
	I0604 22:15:33.070392    4212 start.go:83] releasing machines lock for "ha-609500-m02", held for 2m37.6783589s
	I0604 22:15:33.070392    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500-m02 ).state
	I0604 22:15:35.486308    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:15:35.486541    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:15:35.486541    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500-m02 ).networkadapters[0]).ipaddresses[0]
	I0604 22:15:38.344967    4212 main.go:141] libmachine: [stdout =====>] : 172.20.128.86
	
	I0604 22:15:38.345137    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:15:38.347715    4212 out.go:177] * Found network options:
	I0604 22:15:38.351892    4212 out.go:177]   - NO_PROXY=172.20.131.101
	W0604 22:15:38.355013    4212 proxy.go:119] fail to check proxy env: Error ip not in block
	I0604 22:15:38.357598    4212 out.go:177]   - NO_PROXY=172.20.131.101
	W0604 22:15:38.363337    4212 proxy.go:119] fail to check proxy env: Error ip not in block
	W0604 22:15:38.365105    4212 proxy.go:119] fail to check proxy env: Error ip not in block
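
The "fail to check proxy env: Error ip not in block" warnings come from checking whether the new node's IP (172.20.128.86) is already covered by the NO_PROXY value (172.20.131.101). A hedged sketch of that kind of check; the function name is illustrative, not minikube's API:

    package main

    import (
    	"fmt"
    	"net"
    	"strings"
    )

    // ipInNoProxy reports whether ip matches any comma-separated NO_PROXY entry
    // (exact IP or CIDR block).
    func ipInNoProxy(noProxy string, ip net.IP) bool {
    	for _, entry := range strings.Split(noProxy, ",") {
    		entry = strings.TrimSpace(entry)
    		if entry == "" {
    			continue
    		}
    		if _, block, err := net.ParseCIDR(entry); err == nil {
    			if block.Contains(ip) {
    				return true
    			}
    			continue
    		}
    		if parsed := net.ParseIP(entry); parsed != nil && parsed.Equal(ip) {
    			return true
    		}
    	}
    	return false
    }

    func main() {
    	ip := net.ParseIP("172.20.128.86")             // new node, from the log
    	fmt.Println(ipInNoProxy("172.20.131.101", ip)) // false: "ip not in block"
    }
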
	I0604 22:15:38.369071    4212 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0604 22:15:38.369229    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500-m02 ).state
	I0604 22:15:38.378811    4212 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0604 22:15:38.378811    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500-m02 ).state
	I0604 22:15:40.869798    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:15:40.869798    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:15:40.870630    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500-m02 ).networkadapters[0]).ipaddresses[0]
	I0604 22:15:40.871077    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:15:40.871626    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:15:40.871704    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500-m02 ).networkadapters[0]).ipaddresses[0]
	I0604 22:15:43.799991    4212 main.go:141] libmachine: [stdout =====>] : 172.20.128.86
	
	I0604 22:15:43.800668    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:15:43.800736    4212 sshutil.go:53] new ssh client: &{IP:172.20.128.86 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-609500-m02\id_rsa Username:docker}
	I0604 22:15:43.826729    4212 main.go:141] libmachine: [stdout =====>] : 172.20.128.86
	
	I0604 22:15:43.826729    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:15:43.826729    4212 sshutil.go:53] new ssh client: &{IP:172.20.128.86 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-609500-m02\id_rsa Username:docker}
	I0604 22:15:44.005530    4212 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.6364131s)
	I0604 22:15:44.005530    4212 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.6266735s)
	W0604 22:15:44.005530    4212 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0604 22:15:44.019532    4212 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0604 22:15:44.051153    4212 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0604 22:15:44.051297    4212 start.go:494] detecting cgroup driver to use...
	I0604 22:15:44.051559    4212 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0604 22:15:44.107587    4212 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0604 22:15:44.147359    4212 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0604 22:15:44.171846    4212 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0604 22:15:44.186157    4212 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0604 22:15:44.228452    4212 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0604 22:15:44.266702    4212 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0604 22:15:44.305645    4212 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0604 22:15:44.344196    4212 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0604 22:15:44.382847    4212 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0604 22:15:44.418519    4212 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0604 22:15:44.458098    4212 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0604 22:15:44.500906    4212 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0604 22:15:44.541186    4212 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0604 22:15:44.583574    4212 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0604 22:15:44.822323    4212 ssh_runner.go:195] Run: sudo systemctl restart containerd
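
The block above rewrites /etc/containerd/config.toml in place (pause image, cgroupfs instead of SystemdCgroup, runc v2, CNI conf_dir), enables IP forwarding, and restarts containerd. A condensed sketch that just assembles the same one-liners as strings; the SSH transport that would execute them on the guest is out of scope here:

    package main

    import "fmt"

    func main() {
    	pause := "registry.k8s.io/pause:3.9"
    	cmds := []string{
    		fmt.Sprintf(`sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "%s"|' /etc/containerd/config.toml`, pause),
    		`sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml`,
    		// cgroupfs driver: turn SystemdCgroup off
    		`sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml`,
    		`sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml`,
    		`sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml`,
    		`sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"`,
    		`sudo systemctl daemon-reload`,
    		`sudo systemctl restart containerd`,
    	}
    	for _, c := range cmds {
    		fmt.Println(c)
    	}
    }
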
	I0604 22:15:44.858867    4212 start.go:494] detecting cgroup driver to use...
	I0604 22:15:44.873173    4212 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0604 22:15:44.917334    4212 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0604 22:15:44.959777    4212 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0604 22:15:45.010917    4212 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0604 22:15:45.059638    4212 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0604 22:15:45.106765    4212 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0604 22:15:45.179152    4212 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0604 22:15:45.211428    4212 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0604 22:15:45.276394    4212 ssh_runner.go:195] Run: which cri-dockerd
	I0604 22:15:45.296682    4212 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0604 22:15:45.318004    4212 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0604 22:15:45.369344    4212 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0604 22:15:45.600327    4212 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0604 22:15:45.835045    4212 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0604 22:15:45.835110    4212 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0604 22:15:45.890466    4212 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0604 22:15:46.138804    4212 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0604 22:15:48.745112    4212 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.6062866s)
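
Just before this restart, a 130-byte /etc/docker/daemon.json is scp'd "from memory" to switch Docker to the cgroupfs driver. The log does not show its payload; the snippet below is only an illustrative guess at a minimal equivalent and how such a file could be serialized, not minikube's actual content:

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    func main() {
    	daemon := map[string]interface{}{
    		// assumed settings, not taken verbatim from minikube
    		"exec-opts":  []string{"native.cgroupdriver=cgroupfs"},
    		"log-driver": "json-file",
    		"log-opts":   map[string]string{"max-size": "100m"},
    	}
    	b, err := json.MarshalIndent(daemon, "", "  ")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(string(b))
    }
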
	I0604 22:15:48.760094    4212 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0604 22:15:48.807460    4212 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0604 22:15:48.851484    4212 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0604 22:15:49.096781    4212 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0604 22:15:49.353120    4212 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0604 22:15:49.608884    4212 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0604 22:15:49.657861    4212 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0604 22:15:49.702760    4212 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0604 22:15:49.935380    4212 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0604 22:15:50.061262    4212 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0604 22:15:50.074020    4212 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0604 22:15:50.085731    4212 start.go:562] Will wait 60s for crictl version
	I0604 22:15:50.100702    4212 ssh_runner.go:195] Run: which crictl
	I0604 22:15:50.124785    4212 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0604 22:15:50.197531    4212 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.1.3
	RuntimeApiVersion:  v1
	I0604 22:15:50.207987    4212 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0604 22:15:50.260653    4212 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0604 22:15:50.303576    4212 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.1.3 ...
	I0604 22:15:50.307808    4212 out.go:177]   - env NO_PROXY=172.20.131.101
	I0604 22:15:50.309870    4212 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0604 22:15:50.315811    4212 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0604 22:15:50.315811    4212 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0604 22:15:50.315811    4212 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0604 22:15:50.315811    4212 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:24:f8:85 Flags:up|broadcast|multicast|running}
	I0604 22:15:50.318800    4212 ip.go:210] interface addr: fe80::4093:d10:ab69:6c7d/64
	I0604 22:15:50.318800    4212 ip.go:210] interface addr: 172.20.128.1/20
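
The ip.go lines above walk the host's interfaces, match the one whose name starts with "vEthernet (Default Switch)", and take its IPv4 address (172.20.128.1) as the host-side gateway. A stand-alone sketch of that lookup; the function name is illustrative:

    package main

    import (
    	"fmt"
    	"net"
    	"strings"
    )

    func ipForInterfacePrefix(prefix string) (net.IP, error) {
    	ifaces, err := net.Interfaces()
    	if err != nil {
    		return nil, err
    	}
    	for _, iface := range ifaces {
    		if !strings.HasPrefix(iface.Name, prefix) {
    			continue
    		}
    		addrs, err := iface.Addrs()
    		if err != nil {
    			return nil, err
    		}
    		for _, a := range addrs {
    			if ipnet, ok := a.(*net.IPNet); ok && ipnet.IP.To4() != nil {
    				return ipnet.IP, nil // e.g. 172.20.128.1 in the log
    			}
    		}
    	}
    	return nil, fmt.Errorf("no interface matching %q with an IPv4 address", prefix)
    }

    func main() {
    	ip, err := ipForInterfacePrefix("vEthernet (Default Switch)")
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Println(ip)
    }
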
	I0604 22:15:50.332464    4212 ssh_runner.go:195] Run: grep 172.20.128.1	host.minikube.internal$ /etc/hosts
	I0604 22:15:50.341446    4212 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.20.128.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
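
The host.minikube.internal entry is refreshed with a "filter out the old line, append the new one, copy back" shell one-liner. A small sketch of how that command string can be assembled for a given gateway IP (same shape as the log's command; helper name is illustrative):

    package main

    import "fmt"

    func hostsUpdateCmd(ip, name string) string {
    	// grep uses $'\t...' (the shell expands \t); echo gets a literal tab from Go.
    	return fmt.Sprintf(
    		"{ grep -v $'\\t%s$' \"/etc/hosts\"; echo \"%s\t%s\"; } > /tmp/h.$$; sudo cp /tmp/h.$$ \"/etc/hosts\"",
    		name, ip, name)
    }

    func main() {
    	fmt.Println(hostsUpdateCmd("172.20.128.1", "host.minikube.internal"))
    }
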
	I0604 22:15:50.369374    4212 mustload.go:65] Loading cluster: ha-609500
	I0604 22:15:50.369374    4212 config.go:182] Loaded profile config "ha-609500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0604 22:15:50.370922    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500 ).state
	I0604 22:15:52.798879    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:15:52.799713    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:15:52.799713    4212 host.go:66] Checking if "ha-609500" exists ...
	I0604 22:15:52.800027    4212 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500 for IP: 172.20.128.86
	I0604 22:15:52.800027    4212 certs.go:194] generating shared ca certs ...
	I0604 22:15:52.800027    4212 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0604 22:15:52.800749    4212 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0604 22:15:52.801585    4212 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0604 22:15:52.801585    4212 certs.go:256] generating profile certs ...
	I0604 22:15:52.802310    4212 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\client.key
	I0604 22:15:52.802310    4212 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\apiserver.key.d1566043
	I0604 22:15:52.802310    4212 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\apiserver.crt.d1566043 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.20.131.101 172.20.128.86 172.20.143.254]
	I0604 22:15:53.300810    4212 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\apiserver.crt.d1566043 ...
	I0604 22:15:53.301810    4212 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\apiserver.crt.d1566043: {Name:mkebd533e18fdc3cf055acbe62a648019b0cef31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0604 22:15:53.302124    4212 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\apiserver.key.d1566043 ...
	I0604 22:15:53.302124    4212 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\apiserver.key.d1566043: {Name:mk77abb44ef0f71fd51608e6bb570d80041136e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0604 22:15:53.303134    4212 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\apiserver.crt.d1566043 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\apiserver.crt
	I0604 22:15:53.317401    4212 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\apiserver.key.d1566043 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\apiserver.key
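
The "generating signed profile cert ... with IP's: [...]" step above produces an apiserver certificate whose SANs cover the service IP, localhost, both control-plane node IPs and the HA virtual IP (172.20.143.254). A sketch of what that amounts to with crypto/x509; for brevity it self-signs, whereas the real cert is signed by the minikube CA:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses: []net.IP{ // SANs listed in the log
    			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
    			net.ParseIP("172.20.131.101"), net.ParseIP("172.20.128.86"), net.ParseIP("172.20.143.254"),
    		},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
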
	I0604 22:15:53.317885    4212 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\proxy-client.key
	I0604 22:15:53.317885    4212 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0604 22:15:53.318991    4212 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0604 22:15:53.319134    4212 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0604 22:15:53.319344    4212 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0604 22:15:53.319549    4212 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0604 22:15:53.319760    4212 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0604 22:15:53.319760    4212 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0604 22:15:53.319994    4212 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0604 22:15:53.320237    4212 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14064.pem (1338 bytes)
	W0604 22:15:53.320883    4212 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14064_empty.pem, impossibly tiny 0 bytes
	I0604 22:15:53.320883    4212 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0604 22:15:53.321169    4212 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0604 22:15:53.321169    4212 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0604 22:15:53.321737    4212 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0604 22:15:53.322027    4212 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\140642.pem (1708 bytes)
	I0604 22:15:53.322027    4212 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14064.pem -> /usr/share/ca-certificates/14064.pem
	I0604 22:15:53.322705    4212 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\140642.pem -> /usr/share/ca-certificates/140642.pem
	I0604 22:15:53.322705    4212 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0604 22:15:53.323039    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500 ).state
	I0604 22:15:55.768953    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:15:55.768953    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:15:55.768953    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500 ).networkadapters[0]).ipaddresses[0]
	I0604 22:15:58.693294    4212 main.go:141] libmachine: [stdout =====>] : 172.20.131.101
	
	I0604 22:15:58.693294    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:15:58.693664    4212 sshutil.go:53] new ssh client: &{IP:172.20.131.101 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-609500\id_rsa Username:docker}
	I0604 22:15:58.789834    4212 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0604 22:15:58.800053    4212 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0604 22:15:58.836664    4212 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0604 22:15:58.847012    4212 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0604 22:15:58.887173    4212 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0604 22:15:58.896501    4212 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0604 22:15:58.938621    4212 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0604 22:15:58.947640    4212 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0604 22:15:58.990235    4212 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0604 22:15:58.999379    4212 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0604 22:15:59.040281    4212 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0604 22:15:59.046983    4212 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0604 22:15:59.072650    4212 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0604 22:15:59.130147    4212 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0604 22:15:59.191378    4212 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0604 22:15:59.251128    4212 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0604 22:15:59.311138    4212 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0604 22:15:59.369871    4212 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0604 22:15:59.429361    4212 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0604 22:15:59.483712    4212 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0604 22:15:59.543770    4212 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14064.pem --> /usr/share/ca-certificates/14064.pem (1338 bytes)
	I0604 22:15:59.599007    4212 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\140642.pem --> /usr/share/ca-certificates/140642.pem (1708 bytes)
	I0604 22:15:59.658768    4212 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0604 22:15:59.719070    4212 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0604 22:15:59.765039    4212 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0604 22:15:59.803298    4212 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0604 22:15:59.842209    4212 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0604 22:15:59.879496    4212 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0604 22:15:59.918288    4212 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0604 22:15:59.958130    4212 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0604 22:16:00.011512    4212 ssh_runner.go:195] Run: openssl version
	I0604 22:16:00.037584    4212 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14064.pem && ln -fs /usr/share/ca-certificates/14064.pem /etc/ssl/certs/14064.pem"
	I0604 22:16:00.077750    4212 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14064.pem
	I0604 22:16:00.086549    4212 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun  4 21:50 /usr/share/ca-certificates/14064.pem
	I0604 22:16:00.100135    4212 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14064.pem
	I0604 22:16:00.125121    4212 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14064.pem /etc/ssl/certs/51391683.0"
	I0604 22:16:00.168074    4212 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/140642.pem && ln -fs /usr/share/ca-certificates/140642.pem /etc/ssl/certs/140642.pem"
	I0604 22:16:00.205992    4212 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/140642.pem
	I0604 22:16:00.216370    4212 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun  4 21:50 /usr/share/ca-certificates/140642.pem
	I0604 22:16:00.232332    4212 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/140642.pem
	I0604 22:16:00.253143    4212 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/140642.pem /etc/ssl/certs/3ec20f2e.0"
	I0604 22:16:00.295614    4212 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0604 22:16:00.335800    4212 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0604 22:16:00.344389    4212 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun  4 21:33 /usr/share/ca-certificates/minikubeCA.pem
	I0604 22:16:00.360026    4212 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0604 22:16:00.387374    4212 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0604 22:16:00.427632    4212 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0604 22:16:00.436087    4212 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0604 22:16:00.436372    4212 kubeadm.go:928] updating node {m02 172.20.128.86 8443 v1.30.1 docker true true} ...
	I0604 22:16:00.436372    4212 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-609500-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.20.128.86
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-609500 Namespace:default APIServerHAVIP:172.20.143.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
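
The kubelet drop-in above is rendered per node: the hostname override and --node-ip differ between ha-609500 and ha-609500-m02. A minimal text/template sketch of that rendering; the template text mirrors the log output, not minikube's source:

    package main

    import (
    	"os"
    	"text/template"
    )

    const unit = `[Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}
    `

    func main() {
    	t := template.Must(template.New("kubelet").Parse(unit))
    	t.Execute(os.Stdout, map[string]string{
    		"KubernetesVersion": "v1.30.1",
    		"NodeName":          "ha-609500-m02",
    		"NodeIP":            "172.20.128.86",
    	})
    }
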
	I0604 22:16:00.436372    4212 kube-vip.go:115] generating kube-vip config ...
	I0604 22:16:00.450759    4212 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0604 22:16:00.491043    4212 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0604 22:16:00.491043    4212 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.20.143.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
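
The kube-vip static pod above advertises the virtual IP 172.20.143.254 on eth0 with leader election across the control planes and load-balances the apiserver on port 8443. One quick sanity check on those settings: the VIP should sit inside the same subnet as the node addresses (the Default Switch /20 from the log), so ARP announcements are reachable from both control planes. A small sketch of that check:

    package main

    import (
    	"fmt"
    	"net"
    )

    func main() {
    	_, subnet, err := net.ParseCIDR("172.20.128.0/20") // Default Switch subnet from the log
    	if err != nil {
    		panic(err)
    	}
    	for _, ip := range []string{"172.20.131.101", "172.20.128.86", "172.20.143.254"} {
    		fmt.Printf("%-15s in %s: %v\n", ip, subnet, subnet.Contains(net.ParseIP(ip)))
    	}
    }
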
	I0604 22:16:00.506544    4212 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0604 22:16:00.530861    4212 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.1': No such file or directory
	
	Initiating transfer...
	I0604 22:16:00.546165    4212 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.1
	I0604 22:16:00.577665    4212 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubectl
	I0604 22:16:00.577946    4212 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubeadm
	I0604 22:16:00.577946    4212 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubelet
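
The three downloads above use a "?checksum=file:<url>.sha256" scheme, i.e. each binary is verified against a published SHA-256. A hedged, stand-alone sketch of that verification step (fetch the binary and its digest file, compare):

    package main

    import (
    	"crypto/sha256"
    	"encoding/hex"
    	"fmt"
    	"io"
    	"net/http"
    	"strings"
    )

    func fetch(url string) ([]byte, error) {
    	resp, err := http.Get(url)
    	if err != nil {
    		return nil, err
    	}
    	defer resp.Body.Close()
    	return io.ReadAll(resp.Body)
    }

    func main() {
    	base := "https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl"
    	bin, err := fetch(base)
    	if err != nil {
    		panic(err)
    	}
    	sum, err := fetch(base + ".sha256")
    	if err != nil {
    		panic(err)
    	}
    	got := sha256.Sum256(bin)
    	want := strings.Fields(string(sum))[0] // the .sha256 file holds the hex digest
    	if hex.EncodeToString(got[:]) != want {
    		panic("checksum mismatch")
    	}
    	fmt.Println("kubectl verified:", want)
    }
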
	I0604 22:16:01.728137    4212 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubectl -> /var/lib/minikube/binaries/v1.30.1/kubectl
	I0604 22:16:01.741385    4212 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubectl
	I0604 22:16:01.753930    4212 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubectl': No such file or directory
	I0604 22:16:01.753930    4212 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubectl --> /var/lib/minikube/binaries/v1.30.1/kubectl (51454104 bytes)
	I0604 22:16:01.957772    4212 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubeadm -> /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0604 22:16:01.970768    4212 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0604 22:16:01.993752    4212 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubeadm': No such file or directory
	I0604 22:16:01.993752    4212 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubeadm --> /var/lib/minikube/binaries/v1.30.1/kubeadm (50249880 bytes)
	I0604 22:16:02.988660    4212 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0604 22:16:03.019626    4212 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubelet -> /var/lib/minikube/binaries/v1.30.1/kubelet
	I0604 22:16:03.035923    4212 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubelet
	I0604 22:16:03.044039    4212 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubelet': No such file or directory
	I0604 22:16:03.044262    4212 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubelet --> /var/lib/minikube/binaries/v1.30.1/kubelet (100100024 bytes)
	I0604 22:16:03.650703    4212 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0604 22:16:03.707378    4212 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0604 22:16:03.771992    4212 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0604 22:16:03.814098    4212 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0604 22:16:03.866479    4212 ssh_runner.go:195] Run: grep 172.20.143.254	control-plane.minikube.internal$ /etc/hosts
	I0604 22:16:03.875718    4212 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.20.143.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0604 22:16:03.919473    4212 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0604 22:16:04.179873    4212 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0604 22:16:04.213285    4212 host.go:66] Checking if "ha-609500" exists ...
	I0604 22:16:04.214196    4212 start.go:316] joinCluster: &{Name:ha-609500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19024/minikube-v1.33.1-1717518792-19024-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717518322-19024@sha256:d2210ba725128d67c6173c8b8d82d6c8736e8dad7a6c389a278f795205c6764f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 Clust
erName:ha-609500 Namespace:default APIServerHAVIP:172.20.143.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.131.101 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.20.128.86 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertEx
piration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0604 22:16:04.214196    4212 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0604 22:16:04.214196    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500 ).state
	I0604 22:16:06.666880    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:16:06.666951    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:16:06.666951    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500 ).networkadapters[0]).ipaddresses[0]
	I0604 22:16:09.558709    4212 main.go:141] libmachine: [stdout =====>] : 172.20.131.101
	
	I0604 22:16:09.558777    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:16:09.558777    4212 sshutil.go:53] new ssh client: &{IP:172.20.131.101 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-609500\id_rsa Username:docker}
	I0604 22:16:09.781417    4212 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm token create --print-join-command --ttl=0": (5.5671752s)
	I0604 22:16:09.781615    4212 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:172.20.128.86 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0604 22:16:09.781661    4212 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7bil88.zjhz80y1hcigx5ai --discovery-token-ca-cert-hash sha256:0ff0b921e821f4428dbcf012dd411f291cf65527f1de8321919343295d694e15 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-609500-m02 --control-plane --apiserver-advertise-address=172.20.128.86 --apiserver-bind-port=8443"
	I0604 22:16:57.246013    4212 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7bil88.zjhz80y1hcigx5ai --discovery-token-ca-cert-hash sha256:0ff0b921e821f4428dbcf012dd411f291cf65527f1de8321919343295d694e15 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-609500-m02 --control-plane --apiserver-advertise-address=172.20.128.86 --apiserver-bind-port=8443": (47.4639242s)
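
The join command that just completed is produced by running "kubeadm token create --print-join-command" on the existing control plane and then extending it for an additional control-plane member (CRI socket, node name, advertise address, bind port). A sketch of that assembly, using the token and hash recorded in the log:

    package main

    import (
    	"fmt"
    	"strings"
    )

    func main() {
    	// Shape of kubeadm's --print-join-command output, with values from the log.
    	printJoin := "kubeadm join control-plane.minikube.internal:8443 --token 7bil88.zjhz80y1hcigx5ai --discovery-token-ca-cert-hash sha256:0ff0b921e821f4428dbcf012dd411f291cf65527f1de8321919343295d694e15"

    	extra := []string{
    		"--ignore-preflight-errors=all",
    		"--cri-socket unix:///var/run/cri-dockerd.sock",
    		"--node-name=ha-609500-m02",
    		"--control-plane",
    		"--apiserver-advertise-address=172.20.128.86",
    		"--apiserver-bind-port=8443",
    	}
    	cmd := strings.TrimSpace(printJoin) + " " + strings.Join(extra, " ")
    	fmt.Println(cmd)
    }
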
	I0604 22:16:57.246013    4212 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0604 22:16:58.222878    4212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-609500-m02 minikube.k8s.io/updated_at=2024_06_04T22_16_58_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=901ac483c3e1097c63cda7493d918b612a8127f5 minikube.k8s.io/name=ha-609500 minikube.k8s.io/primary=false
	I0604 22:16:58.414292    4212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-609500-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0604 22:16:58.640783    4212 start.go:318] duration metric: took 54.4260215s to joinCluster
	I0604 22:16:58.643252    4212 start.go:234] Will wait 6m0s for node &{Name:m02 IP:172.20.128.86 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0604 22:16:58.643753    4212 config.go:182] Loaded profile config "ha-609500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0604 22:16:58.647467    4212 out.go:177] * Verifying Kubernetes components...
	I0604 22:16:58.667201    4212 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0604 22:16:59.150379    4212 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0604 22:16:59.186083    4212 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0604 22:16:59.186921    4212 kapi.go:59] client config for ha-609500: &rest.Config{Host:"https://172.20.143.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-609500\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-609500\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Ne
xtProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x240e1a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0604 22:16:59.186921    4212 kubeadm.go:477] Overriding stale ClientConfig host https://172.20.143.254:8443 with https://172.20.131.101:8443
	I0604 22:16:59.186921    4212 node_ready.go:35] waiting up to 6m0s for node "ha-609500-m02" to be "Ready" ...
	I0604 22:16:59.186921    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500-m02
	I0604 22:16:59.186921    4212 round_trippers.go:469] Request Headers:
	I0604 22:16:59.186921    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:16:59.186921    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:16:59.210049    4212 round_trippers.go:574] Response Status: 200 OK in 23 milliseconds
	I0604 22:16:59.687912    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500-m02
	I0604 22:16:59.687987    4212 round_trippers.go:469] Request Headers:
	I0604 22:16:59.687987    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:16:59.687987    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:16:59.697655    4212 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0604 22:17:00.194170    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500-m02
	I0604 22:17:00.194423    4212 round_trippers.go:469] Request Headers:
	I0604 22:17:00.194423    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:17:00.194423    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:17:00.227784    4212 round_trippers.go:574] Response Status: 200 OK in 33 milliseconds
	I0604 22:17:00.701704    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500-m02
	I0604 22:17:00.701704    4212 round_trippers.go:469] Request Headers:
	I0604 22:17:00.701704    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:17:00.701704    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:17:00.711351    4212 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0604 22:17:01.192919    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500-m02
	I0604 22:17:01.192919    4212 round_trippers.go:469] Request Headers:
	I0604 22:17:01.192919    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:17:01.192919    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:17:01.197895    4212 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0604 22:17:01.199208    4212 node_ready.go:53] node "ha-609500-m02" has status "Ready":"False"
	I0604 22:17:01.700979    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500-m02
	I0604 22:17:01.701038    4212 round_trippers.go:469] Request Headers:
	I0604 22:17:01.701038    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:17:01.701038    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:17:01.706713    4212 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0604 22:17:02.195319    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500-m02
	I0604 22:17:02.195319    4212 round_trippers.go:469] Request Headers:
	I0604 22:17:02.195431    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:17:02.195431    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:17:02.200649    4212 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0604 22:17:02.689135    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500-m02
	I0604 22:17:02.689135    4212 round_trippers.go:469] Request Headers:
	I0604 22:17:02.689135    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:17:02.689135    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:17:02.696181    4212 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0604 22:17:03.196854    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500-m02
	I0604 22:17:03.196954    4212 round_trippers.go:469] Request Headers:
	I0604 22:17:03.196954    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:17:03.197051    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:17:03.202815    4212 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0604 22:17:03.203815    4212 node_ready.go:53] node "ha-609500-m02" has status "Ready":"False"
	I0604 22:17:03.695109    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500-m02
	I0604 22:17:03.695109    4212 round_trippers.go:469] Request Headers:
	I0604 22:17:03.695109    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:17:03.695109    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:17:03.700864    4212 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0604 22:17:04.199276    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500-m02
	I0604 22:17:04.199276    4212 round_trippers.go:469] Request Headers:
	I0604 22:17:04.199276    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:17:04.199471    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:17:04.207973    4212 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0604 22:17:04.701178    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500-m02
	I0604 22:17:04.701178    4212 round_trippers.go:469] Request Headers:
	I0604 22:17:04.701178    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:17:04.701178    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:17:04.705829    4212 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0604 22:17:05.190383    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500-m02
	I0604 22:17:05.190383    4212 round_trippers.go:469] Request Headers:
	I0604 22:17:05.190383    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:17:05.190383    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:17:05.201887    4212 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0604 22:17:05.692236    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500-m02
	I0604 22:17:05.692447    4212 round_trippers.go:469] Request Headers:
	I0604 22:17:05.692447    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:17:05.692447    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:17:05.698708    4212 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0604 22:17:05.699718    4212 node_ready.go:53] node "ha-609500-m02" has status "Ready":"False"
	I0604 22:17:06.192378    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500-m02
	I0604 22:17:06.192468    4212 round_trippers.go:469] Request Headers:
	I0604 22:17:06.192468    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:17:06.192535    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:17:06.204989    4212 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0604 22:17:06.694235    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500-m02
	I0604 22:17:06.694235    4212 round_trippers.go:469] Request Headers:
	I0604 22:17:06.694235    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:17:06.694235    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:17:06.699692    4212 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0604 22:17:07.192336    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500-m02
	I0604 22:17:07.192400    4212 round_trippers.go:469] Request Headers:
	I0604 22:17:07.192400    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:17:07.192400    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:17:07.197761    4212 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0604 22:17:07.692693    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500-m02
	I0604 22:17:07.692693    4212 round_trippers.go:469] Request Headers:
	I0604 22:17:07.692693    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:17:07.692782    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:17:07.700092    4212 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0604 22:17:07.701738    4212 node_ready.go:53] node "ha-609500-m02" has status "Ready":"False"
	I0604 22:17:08.193188    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500-m02
	I0604 22:17:08.193507    4212 round_trippers.go:469] Request Headers:
	I0604 22:17:08.193507    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:17:08.193507    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:17:08.201880    4212 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0604 22:17:08.690715    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500-m02
	I0604 22:17:08.690715    4212 round_trippers.go:469] Request Headers:
	I0604 22:17:08.690715    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:17:08.690715    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:17:08.697376    4212 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0604 22:17:09.190763    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500-m02
	I0604 22:17:09.190862    4212 round_trippers.go:469] Request Headers:
	I0604 22:17:09.190862    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:17:09.190862    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:17:09.195300    4212 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0604 22:17:09.689887    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500-m02
	I0604 22:17:09.690196    4212 round_trippers.go:469] Request Headers:
	I0604 22:17:09.690268    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:17:09.690268    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:17:09.696352    4212 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0604 22:17:10.192285    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500-m02
	I0604 22:17:10.192375    4212 round_trippers.go:469] Request Headers:
	I0604 22:17:10.192375    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:17:10.192375    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:17:10.199624    4212 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0604 22:17:10.200672    4212 node_ready.go:49] node "ha-609500-m02" has status "Ready":"True"
	I0604 22:17:10.200672    4212 node_ready.go:38] duration metric: took 11.0136611s for node "ha-609500-m02" to be "Ready" ...
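
The ~500ms GET loop above (node_ready.go plus round_trippers) polls the node object until its Ready condition flips to True, which here took about 11s. The same wait expressed with client-go instead of raw round trippers, assuming the kubeconfig path from the log and standard client-go packages:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func nodeReady(n *corev1.Node) bool {
    	for _, c := range n.Status.Conditions {
    		if c.Type == corev1.NodeReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", `C:\Users\jenkins.minikube6\minikube-integration\kubeconfig`)
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
    	defer cancel()
    	for {
    		n, err := cs.CoreV1().Nodes().Get(ctx, "ha-609500-m02", metav1.GetOptions{})
    		if err == nil && nodeReady(n) {
    			fmt.Println("node Ready")
    			return
    		}
    		select {
    		case <-ctx.Done():
    			panic("timed out waiting for node to be Ready")
    		case <-time.After(500 * time.Millisecond):
    		}
    	}
    }
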
	I0604 22:17:10.200672    4212 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0604 22:17:10.200672    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods
	I0604 22:17:10.200672    4212 round_trippers.go:469] Request Headers:
	I0604 22:17:10.200672    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:17:10.200672    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:17:10.211622    4212 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0604 22:17:10.219615    4212 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-r68pn" in "kube-system" namespace to be "Ready" ...
	I0604 22:17:10.219615    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-r68pn
	I0604 22:17:10.219615    4212 round_trippers.go:469] Request Headers:
	I0604 22:17:10.219615    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:17:10.219615    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:17:10.226008    4212 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0604 22:17:10.226876    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500
	I0604 22:17:10.226966    4212 round_trippers.go:469] Request Headers:
	I0604 22:17:10.226966    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:17:10.226966    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:17:10.237753    4212 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0604 22:17:10.238801    4212 pod_ready.go:92] pod "coredns-7db6d8ff4d-r68pn" in "kube-system" namespace has status "Ready":"True"
	I0604 22:17:10.238801    4212 pod_ready.go:81] duration metric: took 19.1858ms for pod "coredns-7db6d8ff4d-r68pn" in "kube-system" namespace to be "Ready" ...
	I0604 22:17:10.238801    4212 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-zlxf9" in "kube-system" namespace to be "Ready" ...
	I0604 22:17:10.238801    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-zlxf9
	I0604 22:17:10.238801    4212 round_trippers.go:469] Request Headers:
	I0604 22:17:10.238801    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:17:10.238801    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:17:10.242759    4212 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0604 22:17:10.244277    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500
	I0604 22:17:10.244277    4212 round_trippers.go:469] Request Headers:
	I0604 22:17:10.244342    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:17:10.244342    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:17:10.249763    4212 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0604 22:17:10.250409    4212 pod_ready.go:92] pod "coredns-7db6d8ff4d-zlxf9" in "kube-system" namespace has status "Ready":"True"
	I0604 22:17:10.250470    4212 pod_ready.go:81] duration metric: took 11.669ms for pod "coredns-7db6d8ff4d-zlxf9" in "kube-system" namespace to be "Ready" ...
	I0604 22:17:10.250470    4212 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-609500" in "kube-system" namespace to be "Ready" ...
	I0604 22:17:10.250600    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods/etcd-ha-609500
	I0604 22:17:10.250600    4212 round_trippers.go:469] Request Headers:
	I0604 22:17:10.250652    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:17:10.250652    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:17:10.255754    4212 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0604 22:17:10.256762    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500
	I0604 22:17:10.256762    4212 round_trippers.go:469] Request Headers:
	I0604 22:17:10.256762    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:17:10.256762    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:17:10.261761    4212 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0604 22:17:10.261761    4212 pod_ready.go:92] pod "etcd-ha-609500" in "kube-system" namespace has status "Ready":"True"
	I0604 22:17:10.261761    4212 pod_ready.go:81] duration metric: took 11.2905ms for pod "etcd-ha-609500" in "kube-system" namespace to be "Ready" ...
	I0604 22:17:10.261761    4212 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-609500-m02" in "kube-system" namespace to be "Ready" ...
	I0604 22:17:10.261761    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods/etcd-ha-609500-m02
	I0604 22:17:10.262785    4212 round_trippers.go:469] Request Headers:
	I0604 22:17:10.262785    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:17:10.262785    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:17:10.268764    4212 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0604 22:17:10.269602    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500-m02
	I0604 22:17:10.269602    4212 round_trippers.go:469] Request Headers:
	I0604 22:17:10.269602    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:17:10.269602    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:17:10.274208    4212 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0604 22:17:10.771269    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods/etcd-ha-609500-m02
	I0604 22:17:10.771269    4212 round_trippers.go:469] Request Headers:
	I0604 22:17:10.771357    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:17:10.771357    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:17:10.780548    4212 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0604 22:17:10.782247    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500-m02
	I0604 22:17:10.782273    4212 round_trippers.go:469] Request Headers:
	I0604 22:17:10.782273    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:17:10.782273    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:17:10.787298    4212 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0604 22:17:11.272412    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods/etcd-ha-609500-m02
	I0604 22:17:11.272412    4212 round_trippers.go:469] Request Headers:
	I0604 22:17:11.272412    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:17:11.272412    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:17:11.279009    4212 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0604 22:17:11.280932    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500-m02
	I0604 22:17:11.280932    4212 round_trippers.go:469] Request Headers:
	I0604 22:17:11.280932    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:17:11.280932    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:17:11.286253    4212 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0604 22:17:11.773440    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods/etcd-ha-609500-m02
	I0604 22:17:11.773440    4212 round_trippers.go:469] Request Headers:
	I0604 22:17:11.773440    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:17:11.773740    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:17:11.781369    4212 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0604 22:17:11.782141    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500-m02
	I0604 22:17:11.782141    4212 round_trippers.go:469] Request Headers:
	I0604 22:17:11.782141    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:17:11.782141    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:17:11.787848    4212 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0604 22:17:12.262761    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods/etcd-ha-609500-m02
	I0604 22:17:12.262816    4212 round_trippers.go:469] Request Headers:
	I0604 22:17:12.262816    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:17:12.262816    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:17:12.267374    4212 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0604 22:17:12.268337    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500-m02
	I0604 22:17:12.268417    4212 round_trippers.go:469] Request Headers:
	I0604 22:17:12.268417    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:17:12.268417    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:17:12.273611    4212 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0604 22:17:12.274384    4212 pod_ready.go:92] pod "etcd-ha-609500-m02" in "kube-system" namespace has status "Ready":"True"
	I0604 22:17:12.274457    4212 pod_ready.go:81] duration metric: took 2.0126799s for pod "etcd-ha-609500-m02" in "kube-system" namespace to be "Ready" ...
	I0604 22:17:12.274517    4212 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-609500" in "kube-system" namespace to be "Ready" ...
	I0604 22:17:12.274569    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-609500
	I0604 22:17:12.274569    4212 round_trippers.go:469] Request Headers:
	I0604 22:17:12.274631    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:17:12.274631    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:17:12.279890    4212 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0604 22:17:12.281362    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500
	I0604 22:17:12.281362    4212 round_trippers.go:469] Request Headers:
	I0604 22:17:12.281362    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:17:12.281362    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:17:12.301490    4212 round_trippers.go:574] Response Status: 200 OK in 20 milliseconds
	I0604 22:17:12.302614    4212 pod_ready.go:92] pod "kube-apiserver-ha-609500" in "kube-system" namespace has status "Ready":"True"
	I0604 22:17:12.302614    4212 pod_ready.go:81] duration metric: took 28.0959ms for pod "kube-apiserver-ha-609500" in "kube-system" namespace to be "Ready" ...
	I0604 22:17:12.302679    4212 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-609500-m02" in "kube-system" namespace to be "Ready" ...
	I0604 22:17:12.405617    4212 request.go:629] Waited for 102.5527ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-609500-m02
	I0604 22:17:12.405741    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-609500-m02
	I0604 22:17:12.405741    4212 round_trippers.go:469] Request Headers:
	I0604 22:17:12.405883    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:17:12.405883    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:17:12.413274    4212 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0604 22:17:12.594093    4212 request.go:629] Waited for 179.6756ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.131.101:8443/api/v1/nodes/ha-609500-m02
	I0604 22:17:12.594332    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500-m02
	I0604 22:17:12.594398    4212 round_trippers.go:469] Request Headers:
	I0604 22:17:12.594398    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:17:12.594398    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:17:12.600050    4212 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0604 22:17:12.602171    4212 pod_ready.go:92] pod "kube-apiserver-ha-609500-m02" in "kube-system" namespace has status "Ready":"True"
	I0604 22:17:12.602171    4212 pod_ready.go:81] duration metric: took 299.4892ms for pod "kube-apiserver-ha-609500-m02" in "kube-system" namespace to be "Ready" ...
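The "Waited ... due to client-side throttling" lines above come from client-go's default request rate limit on the rest.Config (roughly 5 QPS with a burst of 10 when left unset), not from server-side API Priority and Fairness. A minimal Go sketch, assuming a stock client-go setup rather than minikube's own wrapper, of raising those limits so closely spaced GETs are not delayed (the kubeconfig path is illustrative):

    // Sketch: build a clientset with higher client-side rate limits.
    package main

    import (
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Illustrative kubeconfig location; minikube writes its own under the profile directory.
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cfg.QPS = 50    // default is roughly 5 requests/second when unset
        cfg.Burst = 100 // default burst is roughly 10 when unset
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        fmt.Printf("clientset ready: %T\n", cs)
    }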
	I0604 22:17:12.602171    4212 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-609500" in "kube-system" namespace to be "Ready" ...
	I0604 22:17:12.795792    4212 request.go:629] Waited for 193.3457ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-609500
	I0604 22:17:12.796133    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-609500
	I0604 22:17:12.796133    4212 round_trippers.go:469] Request Headers:
	I0604 22:17:12.796133    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:17:12.796133    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:17:12.803958    4212 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0604 22:17:13.000294    4212 request.go:629] Waited for 195.4506ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.131.101:8443/api/v1/nodes/ha-609500
	I0604 22:17:13.000593    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500
	I0604 22:17:13.000593    4212 round_trippers.go:469] Request Headers:
	I0604 22:17:13.000593    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:17:13.000593    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:17:13.007281    4212 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0604 22:17:13.008476    4212 pod_ready.go:92] pod "kube-controller-manager-ha-609500" in "kube-system" namespace has status "Ready":"True"
	I0604 22:17:13.008476    4212 pod_ready.go:81] duration metric: took 406.3018ms for pod "kube-controller-manager-ha-609500" in "kube-system" namespace to be "Ready" ...
	I0604 22:17:13.008476    4212 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-609500-m02" in "kube-system" namespace to be "Ready" ...
	I0604 22:17:13.203245    4212 request.go:629] Waited for 194.7677ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-609500-m02
	I0604 22:17:13.203388    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-609500-m02
	I0604 22:17:13.203570    4212 round_trippers.go:469] Request Headers:
	I0604 22:17:13.203570    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:17:13.203652    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:17:13.209765    4212 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0604 22:17:13.406909    4212 request.go:629] Waited for 196.2426ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.131.101:8443/api/v1/nodes/ha-609500-m02
	I0604 22:17:13.406909    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500-m02
	I0604 22:17:13.406909    4212 round_trippers.go:469] Request Headers:
	I0604 22:17:13.406909    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:17:13.406909    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:17:13.412813    4212 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0604 22:17:13.413871    4212 pod_ready.go:92] pod "kube-controller-manager-ha-609500-m02" in "kube-system" namespace has status "Ready":"True"
	I0604 22:17:13.413871    4212 pod_ready.go:81] duration metric: took 405.3923ms for pod "kube-controller-manager-ha-609500-m02" in "kube-system" namespace to be "Ready" ...
	I0604 22:17:13.413871    4212 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4ppxq" in "kube-system" namespace to be "Ready" ...
	I0604 22:17:13.593703    4212 request.go:629] Waited for 179.5938ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4ppxq
	I0604 22:17:13.593773    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4ppxq
	I0604 22:17:13.593773    4212 round_trippers.go:469] Request Headers:
	I0604 22:17:13.593773    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:17:13.593773    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:17:13.601378    4212 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0604 22:17:13.797459    4212 request.go:629] Waited for 194.9058ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.131.101:8443/api/v1/nodes/ha-609500
	I0604 22:17:13.797676    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500
	I0604 22:17:13.797676    4212 round_trippers.go:469] Request Headers:
	I0604 22:17:13.797676    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:17:13.797676    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:17:13.803310    4212 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0604 22:17:13.803495    4212 pod_ready.go:92] pod "kube-proxy-4ppxq" in "kube-system" namespace has status "Ready":"True"
	I0604 22:17:13.804023    4212 pod_ready.go:81] duration metric: took 390.1485ms for pod "kube-proxy-4ppxq" in "kube-system" namespace to be "Ready" ...
	I0604 22:17:13.804023    4212 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fnjrb" in "kube-system" namespace to be "Ready" ...
	I0604 22:17:14.001967    4212 request.go:629] Waited for 197.7062ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fnjrb
	I0604 22:17:14.002240    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fnjrb
	I0604 22:17:14.002329    4212 round_trippers.go:469] Request Headers:
	I0604 22:17:14.002329    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:17:14.002329    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:17:14.009570    4212 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0604 22:17:14.207109    4212 request.go:629] Waited for 196.5663ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.131.101:8443/api/v1/nodes/ha-609500-m02
	I0604 22:17:14.207230    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500-m02
	I0604 22:17:14.207230    4212 round_trippers.go:469] Request Headers:
	I0604 22:17:14.207230    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:17:14.207230    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:17:14.213965    4212 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0604 22:17:14.215778    4212 pod_ready.go:92] pod "kube-proxy-fnjrb" in "kube-system" namespace has status "Ready":"True"
	I0604 22:17:14.215778    4212 pod_ready.go:81] duration metric: took 411.7517ms for pod "kube-proxy-fnjrb" in "kube-system" namespace to be "Ready" ...
	I0604 22:17:14.215855    4212 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-609500" in "kube-system" namespace to be "Ready" ...
	I0604 22:17:14.397073    4212 request.go:629] Waited for 180.9858ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-609500
	I0604 22:17:14.397471    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-609500
	I0604 22:17:14.397471    4212 round_trippers.go:469] Request Headers:
	I0604 22:17:14.397471    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:17:14.397471    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:17:14.402648    4212 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0604 22:17:14.603916    4212 request.go:629] Waited for 200.1243ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.131.101:8443/api/v1/nodes/ha-609500
	I0604 22:17:14.603916    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500
	I0604 22:17:14.603916    4212 round_trippers.go:469] Request Headers:
	I0604 22:17:14.603916    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:17:14.603916    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:17:14.615931    4212 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0604 22:17:14.616895    4212 pod_ready.go:92] pod "kube-scheduler-ha-609500" in "kube-system" namespace has status "Ready":"True"
	I0604 22:17:14.616895    4212 pod_ready.go:81] duration metric: took 401.0374ms for pod "kube-scheduler-ha-609500" in "kube-system" namespace to be "Ready" ...
	I0604 22:17:14.616895    4212 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-609500-m02" in "kube-system" namespace to be "Ready" ...
	I0604 22:17:14.797370    4212 request.go:629] Waited for 180.4733ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-609500-m02
	I0604 22:17:14.797773    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-609500-m02
	I0604 22:17:14.797773    4212 round_trippers.go:469] Request Headers:
	I0604 22:17:14.797773    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:17:14.797773    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:17:14.803773    4212 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0604 22:17:15.002138    4212 request.go:629] Waited for 197.9436ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.131.101:8443/api/v1/nodes/ha-609500-m02
	I0604 22:17:15.002438    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500-m02
	I0604 22:17:15.002438    4212 round_trippers.go:469] Request Headers:
	I0604 22:17:15.002500    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:17:15.002500    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:17:15.009674    4212 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0604 22:17:15.010461    4212 pod_ready.go:92] pod "kube-scheduler-ha-609500-m02" in "kube-system" namespace has status "Ready":"True"
	I0604 22:17:15.010461    4212 pod_ready.go:81] duration metric: took 393.5623ms for pod "kube-scheduler-ha-609500-m02" in "kube-system" namespace to be "Ready" ...
	I0604 22:17:15.010461    4212 pod_ready.go:38] duration metric: took 4.8097504s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
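The block above is minikube's pod_ready loop: fetch each system-critical pod, then its node, and report when the pod's Ready condition is True. A minimal Go sketch of the same condition check, assuming a plain client-go clientset instead of minikube's round_trippers logging wrapper (the pod name is taken from the log):

    // Sketch: report whether a kube-system pod currently has Ready=True.
    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func podReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Pod name taken from the log above; any kube-system pod works the same way.
        pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(), "etcd-ha-609500-m02", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Println("ready:", podReady(pod))
    }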
	I0604 22:17:15.010604    4212 api_server.go:52] waiting for apiserver process to appear ...
	I0604 22:17:15.022686    4212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0604 22:17:15.056930    4212 api_server.go:72] duration metric: took 16.4133166s to wait for apiserver process to appear ...
	I0604 22:17:15.056930    4212 api_server.go:88] waiting for apiserver healthz status ...
	I0604 22:17:15.056930    4212 api_server.go:253] Checking apiserver healthz at https://172.20.131.101:8443/healthz ...
	I0604 22:17:15.064716    4212 api_server.go:279] https://172.20.131.101:8443/healthz returned 200:
	ok
	I0604 22:17:15.065474    4212 round_trippers.go:463] GET https://172.20.131.101:8443/version
	I0604 22:17:15.065716    4212 round_trippers.go:469] Request Headers:
	I0604 22:17:15.065716    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:17:15.065716    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:17:15.066766    4212 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0604 22:17:15.067704    4212 api_server.go:141] control plane version: v1.30.1
	I0604 22:17:15.067766    4212 api_server.go:131] duration metric: took 10.7739ms to wait for apiserver health ...
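The healthz wait above is a plain HTTPS GET against the apiserver that expects the literal body "ok". A minimal Go sketch using the address from the log; it skips certificate verification to stay short, whereas minikube's own client trusts the cluster CA and sends credentials (depending on RBAC, an anonymous request to /healthz may be rejected):

    // Sketch: probe the apiserver /healthz endpoint and expect "ok".
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // InsecureSkipVerify keeps the sketch short; real code should trust the cluster CA.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get("https://172.20.131.101:8443/healthz")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("%d %s\n", resp.StatusCode, body) // expect: 200 ok
    }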
	I0604 22:17:15.067793    4212 system_pods.go:43] waiting for kube-system pods to appear ...
	I0604 22:17:15.196808    4212 request.go:629] Waited for 128.9349ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods
	I0604 22:17:15.197297    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods
	I0604 22:17:15.197383    4212 round_trippers.go:469] Request Headers:
	I0604 22:17:15.197383    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:17:15.197444    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:17:15.208819    4212 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0604 22:17:15.216149    4212 system_pods.go:59] 17 kube-system pods found
	I0604 22:17:15.216149    4212 system_pods.go:61] "coredns-7db6d8ff4d-r68pn" [4f018ef8-6a1c-4e18-9f46-2341dca31903] Running
	I0604 22:17:15.216149    4212 system_pods.go:61] "coredns-7db6d8ff4d-zlxf9" [71fcfc44-30ee-4092-9ff7-af29b0ad0012] Running
	I0604 22:17:15.216149    4212 system_pods.go:61] "etcd-ha-609500" [94e7aa9b-cfb1-4910-b464-347d8a5506bc] Running
	I0604 22:17:15.216149    4212 system_pods.go:61] "etcd-ha-609500-m02" [2db71342-8a43-42fd-a415-7f05c00163f6] Running
	I0604 22:17:15.216149    4212 system_pods.go:61] "kindnet-7plk9" [59617539-bb65-430a-a2a6-9b29fe07b8e0] Running
	I0604 22:17:15.216149    4212 system_pods.go:61] "kindnet-phj2j" [56d23c07-ebe0-4876-9a2b-e170cbdf2ce2] Running
	I0604 22:17:15.216149    4212 system_pods.go:61] "kube-apiserver-ha-609500" [048ab298-bd5e-4e53-bfd5-315b7b0349aa] Running
	I0604 22:17:15.216149    4212 system_pods.go:61] "kube-apiserver-ha-609500-m02" [72263744-42da-4c56-bad3-7099b69eb3e7] Running
	I0604 22:17:15.216149    4212 system_pods.go:61] "kube-controller-manager-ha-609500" [6641ef19-a87e-425d-b698-04ac420f56f0] Running
	I0604 22:17:15.216149    4212 system_pods.go:61] "kube-controller-manager-ha-609500-m02" [8e6b0735-115c-456a-b99b-9c55270b1cb2] Running
	I0604 22:17:15.216149    4212 system_pods.go:61] "kube-proxy-4ppxq" [b0b0ad53-65c5-450e-981e-2034d197fc82] Running
	I0604 22:17:15.216149    4212 system_pods.go:61] "kube-proxy-fnjrb" [274d8218-2645-4664-a7fa-3303767b4f87] Running
	I0604 22:17:15.216149    4212 system_pods.go:61] "kube-scheduler-ha-609500" [64451eb3-387e-41ad-be19-ba5b3c45f5a8] Running
	I0604 22:17:15.216149    4212 system_pods.go:61] "kube-scheduler-ha-609500-m02" [b33a6f6a-2681-4248-b0dc-2a1d72041a48] Running
	I0604 22:17:15.216149    4212 system_pods.go:61] "kube-vip-ha-609500" [85ca2aa5-05d8-4f1b-80c8-7511304cc2bb] Running
	I0604 22:17:15.216149    4212 system_pods.go:61] "kube-vip-ha-609500-m02" [143e42dd-8e55-449a-921a-d67c132096e6] Running
	I0604 22:17:15.216149    4212 system_pods.go:61] "storage-provisioner" [c7f1304c-577a-4baf-84d0-51c6006a05f0] Running
	I0604 22:17:15.216149    4212 system_pods.go:74] duration metric: took 148.3547ms to wait for pod list to return data ...
	I0604 22:17:15.216149    4212 default_sa.go:34] waiting for default service account to be created ...
	I0604 22:17:15.404541    4212 request.go:629] Waited for 188.1443ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.131.101:8443/api/v1/namespaces/default/serviceaccounts
	I0604 22:17:15.404541    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/namespaces/default/serviceaccounts
	I0604 22:17:15.404541    4212 round_trippers.go:469] Request Headers:
	I0604 22:17:15.404680    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:17:15.404680    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:17:15.411164    4212 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0604 22:17:15.411795    4212 default_sa.go:45] found service account: "default"
	I0604 22:17:15.411795    4212 default_sa.go:55] duration metric: took 195.6444ms for default service account to be created ...
	I0604 22:17:15.411871    4212 system_pods.go:116] waiting for k8s-apps to be running ...
	I0604 22:17:15.592797    4212 request.go:629] Waited for 180.5579ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods
	I0604 22:17:15.592797    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods
	I0604 22:17:15.592797    4212 round_trippers.go:469] Request Headers:
	I0604 22:17:15.593102    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:17:15.593102    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:17:15.603158    4212 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0604 22:17:15.614390    4212 system_pods.go:86] 17 kube-system pods found
	I0604 22:17:15.615948    4212 system_pods.go:89] "coredns-7db6d8ff4d-r68pn" [4f018ef8-6a1c-4e18-9f46-2341dca31903] Running
	I0604 22:17:15.616032    4212 system_pods.go:89] "coredns-7db6d8ff4d-zlxf9" [71fcfc44-30ee-4092-9ff7-af29b0ad0012] Running
	I0604 22:17:15.616032    4212 system_pods.go:89] "etcd-ha-609500" [94e7aa9b-cfb1-4910-b464-347d8a5506bc] Running
	I0604 22:17:15.616032    4212 system_pods.go:89] "etcd-ha-609500-m02" [2db71342-8a43-42fd-a415-7f05c00163f6] Running
	I0604 22:17:15.616095    4212 system_pods.go:89] "kindnet-7plk9" [59617539-bb65-430a-a2a6-9b29fe07b8e0] Running
	I0604 22:17:15.616095    4212 system_pods.go:89] "kindnet-phj2j" [56d23c07-ebe0-4876-9a2b-e170cbdf2ce2] Running
	I0604 22:17:15.616095    4212 system_pods.go:89] "kube-apiserver-ha-609500" [048ab298-bd5e-4e53-bfd5-315b7b0349aa] Running
	I0604 22:17:15.616095    4212 system_pods.go:89] "kube-apiserver-ha-609500-m02" [72263744-42da-4c56-bad3-7099b69eb3e7] Running
	I0604 22:17:15.616095    4212 system_pods.go:89] "kube-controller-manager-ha-609500" [6641ef19-a87e-425d-b698-04ac420f56f0] Running
	I0604 22:17:15.616095    4212 system_pods.go:89] "kube-controller-manager-ha-609500-m02" [8e6b0735-115c-456a-b99b-9c55270b1cb2] Running
	I0604 22:17:15.616160    4212 system_pods.go:89] "kube-proxy-4ppxq" [b0b0ad53-65c5-450e-981e-2034d197fc82] Running
	I0604 22:17:15.616160    4212 system_pods.go:89] "kube-proxy-fnjrb" [274d8218-2645-4664-a7fa-3303767b4f87] Running
	I0604 22:17:15.616160    4212 system_pods.go:89] "kube-scheduler-ha-609500" [64451eb3-387e-41ad-be19-ba5b3c45f5a8] Running
	I0604 22:17:15.616160    4212 system_pods.go:89] "kube-scheduler-ha-609500-m02" [b33a6f6a-2681-4248-b0dc-2a1d72041a48] Running
	I0604 22:17:15.616160    4212 system_pods.go:89] "kube-vip-ha-609500" [85ca2aa5-05d8-4f1b-80c8-7511304cc2bb] Running
	I0604 22:17:15.616160    4212 system_pods.go:89] "kube-vip-ha-609500-m02" [143e42dd-8e55-449a-921a-d67c132096e6] Running
	I0604 22:17:15.616160    4212 system_pods.go:89] "storage-provisioner" [c7f1304c-577a-4baf-84d0-51c6006a05f0] Running
	I0604 22:17:15.616225    4212 system_pods.go:126] duration metric: took 204.352ms to wait for k8s-apps to be running ...
	I0604 22:17:15.616278    4212 system_svc.go:44] waiting for kubelet service to be running ....
	I0604 22:17:15.628979    4212 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0604 22:17:15.656779    4212 system_svc.go:56] duration metric: took 40.5296ms WaitForService to wait for kubelet
	I0604 22:17:15.656903    4212 kubeadm.go:576] duration metric: took 17.0134285s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0604 22:17:15.657001    4212 node_conditions.go:102] verifying NodePressure condition ...
	I0604 22:17:15.797534    4212 request.go:629] Waited for 140.1458ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.131.101:8443/api/v1/nodes
	I0604 22:17:15.797534    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes
	I0604 22:17:15.797534    4212 round_trippers.go:469] Request Headers:
	I0604 22:17:15.797534    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:17:15.797534    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:17:15.804653    4212 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0604 22:17:15.808049    4212 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0604 22:17:15.808049    4212 node_conditions.go:123] node cpu capacity is 2
	I0604 22:17:15.808049    4212 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0604 22:17:15.808049    4212 node_conditions.go:123] node cpu capacity is 2
	I0604 22:17:15.808049    4212 node_conditions.go:105] duration metric: took 151.0473ms to run NodePressure ...
	I0604 22:17:15.808049    4212 start.go:240] waiting for startup goroutines ...
	I0604 22:17:15.808234    4212 start.go:254] writing updated cluster config ...
	I0604 22:17:15.811937    4212 out.go:177] 
	I0604 22:17:15.834811    4212 config.go:182] Loaded profile config "ha-609500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0604 22:17:15.835160    4212 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\config.json ...
	I0604 22:17:15.842246    4212 out.go:177] * Starting "ha-609500-m03" control-plane node in "ha-609500" cluster
	I0604 22:17:15.848542    4212 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0604 22:17:15.848542    4212 cache.go:56] Caching tarball of preloaded images
	I0604 22:17:15.849390    4212 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0604 22:17:15.849584    4212 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0604 22:17:15.849816    4212 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\config.json ...
	I0604 22:17:15.856035    4212 start.go:360] acquireMachinesLock for ha-609500-m03: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0604 22:17:15.856035    4212 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-609500-m03"
	I0604 22:17:15.856582    4212 start.go:93] Provisioning new machine with config: &{Name:ha-609500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19024/minikube-v1.33.1-1717518792-19024-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717518322-19024@sha256:d2210ba725128d67c6173c8b8d82d6c8736e8dad7a6c389a278f795205c6764f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuberne
tesVersion:v1.30.1 ClusterName:ha-609500 Namespace:default APIServerHAVIP:172.20.143.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.131.101 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.20.128.86 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false
ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountU
ID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0604 22:17:15.856737    4212 start.go:125] createHost starting for "m03" (driver="hyperv")
	I0604 22:17:15.859037    4212 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0604 22:17:15.859892    4212 start.go:159] libmachine.API.Create for "ha-609500" (driver="hyperv")
	I0604 22:17:15.859934    4212 client.go:168] LocalClient.Create starting
	I0604 22:17:15.860111    4212 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0604 22:17:15.860745    4212 main.go:141] libmachine: Decoding PEM data...
	I0604 22:17:15.860745    4212 main.go:141] libmachine: Parsing certificate...
	I0604 22:17:15.860992    4212 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0604 22:17:15.860992    4212 main.go:141] libmachine: Decoding PEM data...
	I0604 22:17:15.861220    4212 main.go:141] libmachine: Parsing certificate...
	I0604 22:17:15.861289    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0604 22:17:18.016963    4212 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0604 22:17:18.016963    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:17:18.017782    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0604 22:17:19.970477    4212 main.go:141] libmachine: [stdout =====>] : False
	
	I0604 22:17:19.970477    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:17:19.971416    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0604 22:17:21.653518    4212 main.go:141] libmachine: [stdout =====>] : True
	
	I0604 22:17:21.653518    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:17:21.653518    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0604 22:17:25.950787    4212 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0604 22:17:25.951607    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:17:25.953809    4212 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1717518792-19024-amd64.iso...
	I0604 22:17:26.438118    4212 main.go:141] libmachine: Creating SSH key...
	I0604 22:17:26.772639    4212 main.go:141] libmachine: Creating VM...
	I0604 22:17:26.772639    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0604 22:17:30.092345    4212 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0604 22:17:30.092345    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:17:30.092345    4212 main.go:141] libmachine: Using switch "Default Switch"
	I0604 22:17:30.092345    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0604 22:17:32.067870    4212 main.go:141] libmachine: [stdout =====>] : True
	
	I0604 22:17:32.067870    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:17:32.067870    4212 main.go:141] libmachine: Creating VHD
	I0604 22:17:32.067870    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-609500-m03\fixed.vhd' -SizeBytes 10MB -Fixed
	I0604 22:17:36.743323    4212 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-609500-m03\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 81B54312-716D-4BDF-B061-C6E0D21F153B
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0604 22:17:36.743595    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:17:36.743595    4212 main.go:141] libmachine: Writing magic tar header
	I0604 22:17:36.743595    4212 main.go:141] libmachine: Writing SSH key tar header
	I0604 22:17:36.756623    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-609500-m03\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-609500-m03\disk.vhd' -VHDType Dynamic -DeleteSource
	I0604 22:17:40.216439    4212 main.go:141] libmachine: [stdout =====>] : 
	I0604 22:17:40.216439    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:17:40.216439    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-609500-m03\disk.vhd' -SizeBytes 20000MB
	I0604 22:17:43.035011    4212 main.go:141] libmachine: [stdout =====>] : 
	I0604 22:17:43.036052    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:17:43.036205    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-609500-m03 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-609500-m03' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0604 22:17:47.193661    4212 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-609500-m03 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0604 22:17:47.193661    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:17:47.193661    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-609500-m03 -DynamicMemoryEnabled $false
	I0604 22:17:49.778655    4212 main.go:141] libmachine: [stdout =====>] : 
	I0604 22:17:49.778655    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:17:49.778655    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-609500-m03 -Count 2
	I0604 22:17:52.268998    4212 main.go:141] libmachine: [stdout =====>] : 
	I0604 22:17:52.268998    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:17:52.269303    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-609500-m03 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-609500-m03\boot2docker.iso'
	I0604 22:17:55.200960    4212 main.go:141] libmachine: [stdout =====>] : 
	I0604 22:17:55.200960    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:17:55.200960    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-609500-m03 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-609500-m03\disk.vhd'
	I0604 22:17:58.219958    4212 main.go:141] libmachine: [stdout =====>] : 
	I0604 22:17:58.219958    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:17:58.219958    4212 main.go:141] libmachine: Starting VM...
	I0604 22:17:58.219958    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-609500-m03
	I0604 22:18:01.639702    4212 main.go:141] libmachine: [stdout =====>] : 
	I0604 22:18:01.640741    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:18:01.640741    4212 main.go:141] libmachine: Waiting for host to start...
	I0604 22:18:01.640741    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500-m03 ).state
	I0604 22:18:04.217573    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:18:04.217573    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:18:04.217573    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500-m03 ).networkadapters[0]).ipaddresses[0]
	I0604 22:18:07.089391    4212 main.go:141] libmachine: [stdout =====>] : 
	I0604 22:18:07.089391    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:18:08.101634    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500-m03 ).state
	I0604 22:18:10.565426    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:18:10.572139    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:18:10.572139    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500-m03 ).networkadapters[0]).ipaddresses[0]
	I0604 22:18:13.348244    4212 main.go:141] libmachine: [stdout =====>] : 
	I0604 22:18:13.348244    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:18:14.358758    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500-m03 ).state
	I0604 22:18:16.743312    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:18:16.748999    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:18:16.748999    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500-m03 ).networkadapters[0]).ipaddresses[0]
	I0604 22:18:19.503540    4212 main.go:141] libmachine: [stdout =====>] : 
	I0604 22:18:19.506899    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:18:20.518435    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500-m03 ).state
	I0604 22:18:22.928253    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:18:22.928253    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:18:22.928253    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500-m03 ).networkadapters[0]).ipaddresses[0]
	I0604 22:18:25.688092    4212 main.go:141] libmachine: [stdout =====>] : 
	I0604 22:18:25.688092    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:18:26.703500    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500-m03 ).state
	I0604 22:18:29.176259    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:18:29.176259    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:18:29.176259    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500-m03 ).networkadapters[0]).ipaddresses[0]
	I0604 22:18:32.046549    4212 main.go:141] libmachine: [stdout =====>] : 172.20.138.190
	
	I0604 22:18:32.046549    4212 main.go:141] libmachine: [stderr =====>] : 
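The loop above keeps invoking PowerShell until the VM's first network adapter reports an IP address, sleeping briefly between attempts. A minimal Go sketch of that pattern with os/exec, mirroring the Hyper-V cmdlet pipeline shown in the log (VM name taken from the log; error handling is abbreviated):

    // Sketch: ask Hyper-V (via powershell.exe) for a VM's first IP address.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func vmIP(name string) (string, error) {
        script := fmt.Sprintf("(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", name)
        out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", script).Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        ip, err := vmIP("ha-609500-m03")
        if err != nil {
            panic(err)
        }
        if ip == "" {
            fmt.Println("no IP yet; the log above simply retries after a short wait")
            return
        }
        fmt.Println("VM IP:", ip)
    }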
	I0604 22:18:32.046549    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500-m03 ).state
	I0604 22:18:34.376539    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:18:34.389830    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:18:34.389899    4212 machine.go:94] provisionDockerMachine start ...
	I0604 22:18:34.389899    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500-m03 ).state
	I0604 22:18:36.782927    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:18:36.782927    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:18:36.782927    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500-m03 ).networkadapters[0]).ipaddresses[0]
	I0604 22:18:39.575206    4212 main.go:141] libmachine: [stdout =====>] : 172.20.138.190
	
	I0604 22:18:39.575206    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:18:39.579720    4212 main.go:141] libmachine: Using SSH client type: native
	I0604 22:18:39.594078    4212 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf6a540] 0xf6d120 <nil>  [] 0s} 172.20.138.190 22 <nil> <nil>}
	I0604 22:18:39.594078    4212 main.go:141] libmachine: About to run SSH command:
	hostname
	I0604 22:18:39.729162    4212 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0604 22:18:39.729162    4212 buildroot.go:166] provisioning hostname "ha-609500-m03"
	I0604 22:18:39.729162    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500-m03 ).state
	I0604 22:18:42.031451    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:18:42.031451    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:18:42.043815    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500-m03 ).networkadapters[0]).ipaddresses[0]
	I0604 22:18:44.836817    4212 main.go:141] libmachine: [stdout =====>] : 172.20.138.190
	
	I0604 22:18:44.847200    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:18:44.854009    4212 main.go:141] libmachine: Using SSH client type: native
	I0604 22:18:44.854311    4212 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf6a540] 0xf6d120 <nil>  [] 0s} 172.20.138.190 22 <nil> <nil>}
	I0604 22:18:44.854311    4212 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-609500-m03 && echo "ha-609500-m03" | sudo tee /etc/hostname
	I0604 22:18:45.021050    4212 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-609500-m03
	
	I0604 22:18:45.021131    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500-m03 ).state
	I0604 22:18:47.332759    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:18:47.332987    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:18:47.332987    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500-m03 ).networkadapters[0]).ipaddresses[0]
	I0604 22:18:50.067376    4212 main.go:141] libmachine: [stdout =====>] : 172.20.138.190
	
	I0604 22:18:50.067376    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:18:50.075072    4212 main.go:141] libmachine: Using SSH client type: native
	I0604 22:18:50.075072    4212 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf6a540] 0xf6d120 <nil>  [] 0s} 172.20.138.190 22 <nil> <nil>}
	I0604 22:18:50.075072    4212 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-609500-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-609500-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-609500-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0604 22:18:50.224074    4212 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0604 22:18:50.224074    4212 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0604 22:18:50.224074    4212 buildroot.go:174] setting up certificates
	I0604 22:18:50.224211    4212 provision.go:84] configureAuth start
	I0604 22:18:50.224211    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500-m03 ).state
	I0604 22:18:52.539972    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:18:52.539972    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:18:52.539972    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500-m03 ).networkadapters[0]).ipaddresses[0]
	I0604 22:18:55.298606    4212 main.go:141] libmachine: [stdout =====>] : 172.20.138.190
	
	I0604 22:18:55.298685    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:18:55.298994    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500-m03 ).state
	I0604 22:18:57.628356    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:18:57.628356    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:18:57.628356    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500-m03 ).networkadapters[0]).ipaddresses[0]
	I0604 22:19:00.387606    4212 main.go:141] libmachine: [stdout =====>] : 172.20.138.190
	
	I0604 22:19:00.399609    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:19:00.399689    4212 provision.go:143] copyHostCerts
	I0604 22:19:00.399988    4212 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0604 22:19:00.400328    4212 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0604 22:19:00.400328    4212 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0604 22:19:00.401153    4212 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0604 22:19:00.402422    4212 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0604 22:19:00.403237    4212 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0604 22:19:00.403348    4212 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0604 22:19:00.403648    4212 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0604 22:19:00.404307    4212 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0604 22:19:00.404843    4212 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0604 22:19:00.404843    4212 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0604 22:19:00.405260    4212 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0604 22:19:00.406351    4212 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-609500-m03 san=[127.0.0.1 172.20.138.190 ha-609500-m03 localhost minikube]
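For context, the server certificate generated at this step carries the SANs listed in the log line above: the node IP, the node hostname, localhost and minikube. Below is a minimal Go sketch of issuing such a certificate with crypto/x509; it self-signs for brevity, whereas the provisioner signs with the ca.pem/ca-key.pem pair noted above, so treat it as illustrative only.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Illustrative sketch only: self-signed here, while the real flow signs with
	// the minikube CA. SANs match the san=[...] list in the log line above.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-609500-m03"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(1, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.20.138.190")},
		DNSNames:     []string{"ha-609500-m03", "localhost", "minikube"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	f, err := os.Create("server.pem")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()
	pem.Encode(f, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}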
	I0604 22:19:00.852655    4212 provision.go:177] copyRemoteCerts
	I0604 22:19:00.879864    4212 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0604 22:19:00.879864    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500-m03 ).state
	I0604 22:19:03.179424    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:19:03.179424    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:19:03.179424    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500-m03 ).networkadapters[0]).ipaddresses[0]
	I0604 22:19:05.968956    4212 main.go:141] libmachine: [stdout =====>] : 172.20.138.190
	
	I0604 22:19:05.970403    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:19:05.970403    4212 sshutil.go:53] new ssh client: &{IP:172.20.138.190 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-609500-m03\id_rsa Username:docker}
	I0604 22:19:06.085194    4212 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.2052892s)
	I0604 22:19:06.085321    4212 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0604 22:19:06.085897    4212 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0604 22:19:06.137417    4212 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0604 22:19:06.138044    4212 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0604 22:19:06.190563    4212 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0604 22:19:06.191140    4212 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0604 22:19:06.250566    4212 provision.go:87] duration metric: took 16.0262271s to configureAuth
	I0604 22:19:06.250566    4212 buildroot.go:189] setting minikube options for container-runtime
	I0604 22:19:06.251500    4212 config.go:182] Loaded profile config "ha-609500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0604 22:19:06.251500    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500-m03 ).state
	I0604 22:19:08.541517    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:19:08.541517    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:19:08.541648    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500-m03 ).networkadapters[0]).ipaddresses[0]
	I0604 22:19:11.305268    4212 main.go:141] libmachine: [stdout =====>] : 172.20.138.190
	
	I0604 22:19:11.316493    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:19:11.324839    4212 main.go:141] libmachine: Using SSH client type: native
	I0604 22:19:11.325508    4212 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf6a540] 0xf6d120 <nil>  [] 0s} 172.20.138.190 22 <nil> <nil>}
	I0604 22:19:11.325508    4212 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0604 22:19:11.466380    4212 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0604 22:19:11.466380    4212 buildroot.go:70] root file system type: tmpfs
	I0604 22:19:11.466380    4212 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0604 22:19:11.466934    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500-m03 ).state
	I0604 22:19:13.757387    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:19:13.757387    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:19:13.768947    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500-m03 ).networkadapters[0]).ipaddresses[0]
	I0604 22:19:16.568888    4212 main.go:141] libmachine: [stdout =====>] : 172.20.138.190
	
	I0604 22:19:16.568888    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:19:16.575237    4212 main.go:141] libmachine: Using SSH client type: native
	I0604 22:19:16.575237    4212 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf6a540] 0xf6d120 <nil>  [] 0s} 172.20.138.190 22 <nil> <nil>}
	I0604 22:19:16.575237    4212 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.20.131.101"
	Environment="NO_PROXY=172.20.131.101,172.20.128.86"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0604 22:19:16.741579    4212 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.20.131.101
	Environment=NO_PROXY=172.20.131.101,172.20.128.86
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0604 22:19:16.741855    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500-m03 ).state
	I0604 22:19:19.046101    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:19:19.060818    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:19:19.060818    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500-m03 ).networkadapters[0]).ipaddresses[0]
	I0604 22:19:21.850153    4212 main.go:141] libmachine: [stdout =====>] : 172.20.138.190
	
	I0604 22:19:21.861385    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:19:21.867169    4212 main.go:141] libmachine: Using SSH client type: native
	I0604 22:19:21.868026    4212 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf6a540] 0xf6d120 <nil>  [] 0s} 172.20.138.190 22 <nil> <nil>}
	I0604 22:19:21.868149    4212 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0604 22:19:24.113556    4212 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0604 22:19:24.113556    4212 machine.go:97] duration metric: took 49.7232595s to provisionDockerMachine
	I0604 22:19:24.113556    4212 client.go:171] duration metric: took 2m8.2525918s to LocalClient.Create
	I0604 22:19:24.113700    4212 start.go:167] duration metric: took 2m8.2527778s to libmachine.API.Create "ha-609500"
	I0604 22:19:24.113700    4212 start.go:293] postStartSetup for "ha-609500-m03" (driver="hyperv")
	I0604 22:19:24.113700    4212 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0604 22:19:24.127305    4212 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0604 22:19:24.127305    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500-m03 ).state
	I0604 22:19:26.413451    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:19:26.413451    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:19:26.413451    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500-m03 ).networkadapters[0]).ipaddresses[0]
	I0604 22:19:29.219933    4212 main.go:141] libmachine: [stdout =====>] : 172.20.138.190
	
	I0604 22:19:29.219933    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:19:29.219933    4212 sshutil.go:53] new ssh client: &{IP:172.20.138.190 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-609500-m03\id_rsa Username:docker}
	I0604 22:19:29.345343    4212 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.217923s)
	I0604 22:19:29.356847    4212 ssh_runner.go:195] Run: cat /etc/os-release
	I0604 22:19:29.367528    4212 info.go:137] Remote host: Buildroot 2023.02.9
	I0604 22:19:29.367622    4212 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0604 22:19:29.368323    4212 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0604 22:19:29.369392    4212 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\140642.pem -> 140642.pem in /etc/ssl/certs
	I0604 22:19:29.369392    4212 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\140642.pem -> /etc/ssl/certs/140642.pem
	I0604 22:19:29.380511    4212 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0604 22:19:29.402994    4212 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\140642.pem --> /etc/ssl/certs/140642.pem (1708 bytes)
	I0604 22:19:29.457650    4212 start.go:296] duration metric: took 5.3439076s for postStartSetup
	I0604 22:19:29.460486    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500-m03 ).state
	I0604 22:19:31.799369    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:19:31.811156    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:19:31.811156    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500-m03 ).networkadapters[0]).ipaddresses[0]
	I0604 22:19:34.635575    4212 main.go:141] libmachine: [stdout =====>] : 172.20.138.190
	
	I0604 22:19:34.635663    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:19:34.636323    4212 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\config.json ...
	I0604 22:19:34.639081    4212 start.go:128] duration metric: took 2m18.7812291s to createHost
	I0604 22:19:34.639081    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500-m03 ).state
	I0604 22:19:36.957034    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:19:36.957277    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:19:36.957277    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500-m03 ).networkadapters[0]).ipaddresses[0]
	I0604 22:19:39.726440    4212 main.go:141] libmachine: [stdout =====>] : 172.20.138.190
	
	I0604 22:19:39.726440    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:19:39.732838    4212 main.go:141] libmachine: Using SSH client type: native
	I0604 22:19:39.732984    4212 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf6a540] 0xf6d120 <nil>  [] 0s} 172.20.138.190 22 <nil> <nil>}
	I0604 22:19:39.733570    4212 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0604 22:19:39.872822    4212 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717539579.879690082
	
	I0604 22:19:39.872977    4212 fix.go:216] guest clock: 1717539579.879690082
	I0604 22:19:39.872977    4212 fix.go:229] Guest: 2024-06-04 22:19:39.879690082 +0000 UTC Remote: 2024-06-04 22:19:34.6390814 +0000 UTC m=+621.179354901 (delta=5.240608682s)
	I0604 22:19:39.873095    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500-m03 ).state
	I0604 22:19:42.205185    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:19:42.205185    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:19:42.217825    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500-m03 ).networkadapters[0]).ipaddresses[0]
	I0604 22:19:44.986647    4212 main.go:141] libmachine: [stdout =====>] : 172.20.138.190
	
	I0604 22:19:44.986647    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:19:45.004810    4212 main.go:141] libmachine: Using SSH client type: native
	I0604 22:19:45.005387    4212 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf6a540] 0xf6d120 <nil>  [] 0s} 172.20.138.190 22 <nil> <nil>}
	I0604 22:19:45.005387    4212 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1717539579
	I0604 22:19:45.159494    4212 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Jun  4 22:19:39 UTC 2024
	
	I0604 22:19:45.159494    4212 fix.go:236] clock set: Tue Jun  4 22:19:39 UTC 2024
	 (err=<nil>)
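The fix.go lines above are the guest-clock check: the guest's date +%s.%N output is parsed and compared against the host-side timestamp, and because the drift (about 5.24s here) is judged too large, the clock is reset with date -s. The Go sketch below reproduces that delta from the values in this log; the 2-second tolerance is an assumption for illustration, not minikube's actual threshold.

package main

import (
	"fmt"
	"time"
)

func main() {
	// Values copied from the log above: guest clock from `date +%s.%N`,
	// host-side "Remote" timestamp from the fix.go delta line.
	guest := time.Unix(1717539579, 879690082)
	local := time.Date(2024, time.June, 4, 22, 19, 34, 639081400, time.UTC)

	delta := guest.Sub(local)
	fmt.Println("delta:", delta) // 5.240608682s, matching the log

	// Assumed tolerance, for illustration only.
	const tolerance = 2 * time.Second
	if delta > tolerance || delta < -tolerance {
		fmt.Printf("would resync: sudo date -s @%d\n", guest.Unix())
	}
}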
	I0604 22:19:45.159619    4212 start.go:83] releasing machines lock for "ha-609500-m03", held for 2m29.3023864s
	I0604 22:19:45.159761    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500-m03 ).state
	I0604 22:19:47.493325    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:19:47.502994    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:19:47.502994    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500-m03 ).networkadapters[0]).ipaddresses[0]
	I0604 22:19:50.257860    4212 main.go:141] libmachine: [stdout =====>] : 172.20.138.190
	
	I0604 22:19:50.257860    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:19:50.273017    4212 out.go:177] * Found network options:
	I0604 22:19:50.283263    4212 out.go:177]   - NO_PROXY=172.20.131.101,172.20.128.86
	W0604 22:19:50.286498    4212 proxy.go:119] fail to check proxy env: Error ip not in block
	W0604 22:19:50.287069    4212 proxy.go:119] fail to check proxy env: Error ip not in block
	I0604 22:19:50.289289    4212 out.go:177]   - NO_PROXY=172.20.131.101,172.20.128.86
	W0604 22:19:50.293476    4212 proxy.go:119] fail to check proxy env: Error ip not in block
	W0604 22:19:50.293476    4212 proxy.go:119] fail to check proxy env: Error ip not in block
	W0604 22:19:50.294961    4212 proxy.go:119] fail to check proxy env: Error ip not in block
	W0604 22:19:50.295047    4212 proxy.go:119] fail to check proxy env: Error ip not in block
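The repeated proxy.go warnings above mean that the new node's address (172.20.138.190) is not covered by the NO_PROXY entries inherited from the first two nodes, so the check fails with "ip not in block". A rough Go sketch of that kind of membership test is below; the inNoProxy helper is hypothetical and is not minikube's actual implementation.

package main

import (
	"fmt"
	"net"
	"strings"
)

// inNoProxy reports whether ip is covered by a NO_PROXY-style list of
// comma-separated entries, each either a literal IP or a CIDR block.
// Hypothetical sketch only.
func inNoProxy(noProxy, ip string) bool {
	target := net.ParseIP(ip)
	for _, entry := range strings.Split(noProxy, ",") {
		entry = strings.TrimSpace(entry)
		if entry == "" {
			continue
		}
		if _, block, err := net.ParseCIDR(entry); err == nil {
			if block.Contains(target) {
				return true
			}
			continue
		}
		if net.ParseIP(entry).Equal(target) {
			return true
		}
	}
	return false
}

func main() {
	noProxy := "172.20.131.101,172.20.128.86"          // from the log above
	fmt.Println(inNoProxy(noProxy, "172.20.138.190")) // false, hence the warnings
}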
	I0604 22:19:50.298563    4212 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0604 22:19:50.298688    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500-m03 ).state
	I0604 22:19:50.307138    4212 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0604 22:19:50.307138    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500-m03 ).state
	I0604 22:19:52.663975    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:19:52.664269    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:19:52.664332    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500-m03 ).networkadapters[0]).ipaddresses[0]
	I0604 22:19:52.665508    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:19:52.665629    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:19:52.665629    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500-m03 ).networkadapters[0]).ipaddresses[0]
	I0604 22:19:55.503539    4212 main.go:141] libmachine: [stdout =====>] : 172.20.138.190
	
	I0604 22:19:55.515282    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:19:55.515689    4212 sshutil.go:53] new ssh client: &{IP:172.20.138.190 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-609500-m03\id_rsa Username:docker}
	I0604 22:19:55.532080    4212 main.go:141] libmachine: [stdout =====>] : 172.20.138.190
	
	I0604 22:19:55.532080    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:19:55.532680    4212 sshutil.go:53] new ssh client: &{IP:172.20.138.190 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-609500-m03\id_rsa Username:docker}
	I0604 22:19:55.621432    4212 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.3141393s)
	W0604 22:19:55.621543    4212 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0604 22:19:55.636501    4212 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0604 22:19:55.696426    4212 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0604 22:19:55.696558    4212 start.go:494] detecting cgroup driver to use...
	I0604 22:19:55.696426    4212 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.3977846s)
	I0604 22:19:55.696758    4212 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0604 22:19:55.752022    4212 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0604 22:19:55.793637    4212 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0604 22:19:55.817754    4212 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0604 22:19:55.836562    4212 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0604 22:19:55.869709    4212 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0604 22:19:55.905012    4212 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0604 22:19:55.937060    4212 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0604 22:19:55.973824    4212 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0604 22:19:56.012148    4212 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0604 22:19:56.045272    4212 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0604 22:19:56.080610    4212 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0604 22:19:56.118203    4212 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0604 22:19:56.149240    4212 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0604 22:19:56.184479    4212 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0604 22:19:56.385530    4212 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0604 22:19:56.421914    4212 start.go:494] detecting cgroup driver to use...
	I0604 22:19:56.435861    4212 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0604 22:19:56.480470    4212 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0604 22:19:56.522280    4212 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0604 22:19:56.569398    4212 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0604 22:19:56.612881    4212 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0604 22:19:56.654195    4212 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0604 22:19:56.721171    4212 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0604 22:19:56.747194    4212 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0604 22:19:56.796409    4212 ssh_runner.go:195] Run: which cri-dockerd
	I0604 22:19:56.818684    4212 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0604 22:19:56.845534    4212 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0604 22:19:56.894900    4212 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0604 22:19:57.112407    4212 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0604 22:19:57.321941    4212 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0604 22:19:57.326112    4212 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0604 22:19:57.384590    4212 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0604 22:19:57.601077    4212 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0604 22:20:00.153624    4212 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5525266s)
	I0604 22:20:00.167916    4212 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0604 22:20:00.209323    4212 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0604 22:20:00.251128    4212 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0604 22:20:00.461529    4212 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0604 22:20:00.674098    4212 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0604 22:20:00.898757    4212 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0604 22:20:00.946907    4212 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0604 22:20:00.988346    4212 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0604 22:20:01.216717    4212 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0604 22:20:01.340371    4212 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0604 22:20:01.353399    4212 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0604 22:20:01.362906    4212 start.go:562] Will wait 60s for crictl version
	I0604 22:20:01.372779    4212 ssh_runner.go:195] Run: which crictl
	I0604 22:20:01.398782    4212 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0604 22:20:01.465797    4212 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.1.3
	RuntimeApiVersion:  v1
	I0604 22:20:01.476373    4212 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0604 22:20:01.521856    4212 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0604 22:20:01.562379    4212 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.1.3 ...
	I0604 22:20:01.565177    4212 out.go:177]   - env NO_PROXY=172.20.131.101
	I0604 22:20:01.567782    4212 out.go:177]   - env NO_PROXY=172.20.131.101,172.20.128.86
	I0604 22:20:01.569825    4212 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0604 22:20:01.572673    4212 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0604 22:20:01.572673    4212 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0604 22:20:01.572673    4212 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0604 22:20:01.572673    4212 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:24:f8:85 Flags:up|broadcast|multicast|running}
	I0604 22:20:01.575844    4212 ip.go:210] interface addr: fe80::4093:d10:ab69:6c7d/64
	I0604 22:20:01.575844    4212 ip.go:210] interface addr: 172.20.128.1/20
	I0604 22:20:01.589382    4212 ssh_runner.go:195] Run: grep 172.20.128.1	host.minikube.internal$ /etc/hosts
	I0604 22:20:01.596412    4212 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.20.128.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
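Here ip.go is resolving which host interface backs the Hyper-V Default Switch so that its address, 172.20.128.1, can be written into the guest's /etc/hosts as host.minikube.internal. A minimal standard-library Go sketch of that interface-by-prefix lookup follows; the prefix string is taken from the log and nothing else is assumed.

package main

import (
	"fmt"
	"net"
	"strings"
)

func main() {
	// Find the adapter whose name starts with the Default Switch prefix and
	// print its IPv4 address (172.20.128.1 in this run).
	const prefix = "vEthernet (Default Switch)"
	ifaces, err := net.Interfaces()
	if err != nil {
		panic(err)
	}
	for _, ifc := range ifaces {
		if !strings.HasPrefix(ifc.Name, prefix) {
			continue
		}
		addrs, _ := ifc.Addrs()
		for _, a := range addrs {
			if ipnet, ok := a.(*net.IPNet); ok && ipnet.IP.To4() != nil {
				fmt.Printf("%s -> %s\n", ifc.Name, ipnet.IP)
			}
		}
	}
}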
	I0604 22:20:01.620012    4212 mustload.go:65] Loading cluster: ha-609500
	I0604 22:20:01.620897    4212 config.go:182] Loaded profile config "ha-609500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0604 22:20:01.621115    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500 ).state
	I0604 22:20:03.936853    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:20:03.949554    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:20:03.949647    4212 host.go:66] Checking if "ha-609500" exists ...
	I0604 22:20:03.949949    4212 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500 for IP: 172.20.138.190
	I0604 22:20:03.949949    4212 certs.go:194] generating shared ca certs ...
	I0604 22:20:03.949949    4212 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0604 22:20:03.950898    4212 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0604 22:20:03.951176    4212 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0604 22:20:03.951421    4212 certs.go:256] generating profile certs ...
	I0604 22:20:03.952091    4212 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\client.key
	I0604 22:20:03.952175    4212 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\apiserver.key.0c25c7e1
	I0604 22:20:03.952328    4212 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\apiserver.crt.0c25c7e1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.20.131.101 172.20.128.86 172.20.138.190 172.20.143.254]
	I0604 22:20:04.222840    4212 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\apiserver.crt.0c25c7e1 ...
	I0604 22:20:04.222840    4212 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\apiserver.crt.0c25c7e1: {Name:mk10e3a4dacee8587b1af1c89003e8c486ec29a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0604 22:20:04.232879    4212 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\apiserver.key.0c25c7e1 ...
	I0604 22:20:04.232879    4212 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\apiserver.key.0c25c7e1: {Name:mk01b58f546b2519a6aab4b1ecb91801a6947cef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0604 22:20:04.233443    4212 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\apiserver.crt.0c25c7e1 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\apiserver.crt
	I0604 22:20:04.244243    4212 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\apiserver.key.0c25c7e1 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\apiserver.key
	I0604 22:20:04.246796    4212 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\proxy-client.key
	I0604 22:20:04.246796    4212 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0604 22:20:04.247850    4212 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0604 22:20:04.247953    4212 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0604 22:20:04.247953    4212 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0604 22:20:04.247953    4212 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0604 22:20:04.248537    4212 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0604 22:20:04.248579    4212 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0604 22:20:04.248870    4212 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0604 22:20:04.249234    4212 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14064.pem (1338 bytes)
	W0604 22:20:04.249691    4212 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14064_empty.pem, impossibly tiny 0 bytes
	I0604 22:20:04.249732    4212 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0604 22:20:04.249982    4212 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0604 22:20:04.250462    4212 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0604 22:20:04.250665    4212 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0604 22:20:04.250894    4212 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\140642.pem (1708 bytes)
	I0604 22:20:04.250894    4212 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0604 22:20:04.251518    4212 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14064.pem -> /usr/share/ca-certificates/14064.pem
	I0604 22:20:04.251780    4212 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\140642.pem -> /usr/share/ca-certificates/140642.pem
	I0604 22:20:04.252050    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500 ).state
	I0604 22:20:06.588272    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:20:06.588272    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:20:06.588272    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500 ).networkadapters[0]).ipaddresses[0]
	I0604 22:20:09.360026    4212 main.go:141] libmachine: [stdout =====>] : 172.20.131.101
	
	I0604 22:20:09.360026    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:20:09.360898    4212 sshutil.go:53] new ssh client: &{IP:172.20.131.101 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-609500\id_rsa Username:docker}
	I0604 22:20:09.461548    4212 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0604 22:20:09.469270    4212 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0604 22:20:09.503950    4212 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0604 22:20:09.510557    4212 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0604 22:20:09.550240    4212 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0604 22:20:09.560149    4212 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0604 22:20:09.596920    4212 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0604 22:20:09.606555    4212 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0604 22:20:09.641751    4212 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0604 22:20:09.649999    4212 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0604 22:20:09.684273    4212 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0604 22:20:09.692229    4212 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0604 22:20:09.713270    4212 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0604 22:20:09.761078    4212 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0604 22:20:09.810381    4212 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0604 22:20:09.864999    4212 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0604 22:20:09.918456    4212 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0604 22:20:09.981883    4212 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0604 22:20:10.040937    4212 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0604 22:20:10.095167    4212 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-609500\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0604 22:20:10.156074    4212 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0604 22:20:10.206754    4212 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14064.pem --> /usr/share/ca-certificates/14064.pem (1338 bytes)
	I0604 22:20:10.257371    4212 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\140642.pem --> /usr/share/ca-certificates/140642.pem (1708 bytes)
	I0604 22:20:10.307878    4212 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0604 22:20:10.342725    4212 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0604 22:20:10.377622    4212 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0604 22:20:10.414202    4212 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0604 22:20:10.449312    4212 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0604 22:20:10.485650    4212 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0604 22:20:10.522275    4212 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0604 22:20:10.568613    4212 ssh_runner.go:195] Run: openssl version
	I0604 22:20:10.592350    4212 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0604 22:20:10.629394    4212 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0604 22:20:10.637578    4212 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun  4 21:33 /usr/share/ca-certificates/minikubeCA.pem
	I0604 22:20:10.649327    4212 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0604 22:20:10.677162    4212 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0604 22:20:10.714204    4212 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14064.pem && ln -fs /usr/share/ca-certificates/14064.pem /etc/ssl/certs/14064.pem"
	I0604 22:20:10.747727    4212 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14064.pem
	I0604 22:20:10.756578    4212 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun  4 21:50 /usr/share/ca-certificates/14064.pem
	I0604 22:20:10.771672    4212 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14064.pem
	I0604 22:20:10.796870    4212 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14064.pem /etc/ssl/certs/51391683.0"
	I0604 22:20:10.832848    4212 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/140642.pem && ln -fs /usr/share/ca-certificates/140642.pem /etc/ssl/certs/140642.pem"
	I0604 22:20:10.867279    4212 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/140642.pem
	I0604 22:20:10.875875    4212 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun  4 21:50 /usr/share/ca-certificates/140642.pem
	I0604 22:20:10.893326    4212 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/140642.pem
	I0604 22:20:10.917107    4212 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/140642.pem /etc/ssl/certs/3ec20f2e.0"
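Each CA bundle above is first linked by name into /etc/ssl/certs and then also linked under its OpenSSL subject hash (for example b5213941.0 for the minikube CA), which is how OpenSSL-based clients locate it. The sketch below simply shells out to the same openssl x509 -hash -noout invocation to print the equivalent symlink command; it assumes an openssl binary is available and is illustrative only.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Paths taken from the log above; the output is the symlink command the
	// provisioner runs for this CA.
	cert := "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		fmt.Println("openssl failed:", err)
		return
	}
	hash := strings.TrimSpace(string(out))
	fmt.Printf("ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/%s.0\n", hash)
}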
	I0604 22:20:10.953524    4212 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0604 22:20:10.963180    4212 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0604 22:20:10.963705    4212 kubeadm.go:928] updating node {m03 172.20.138.190 8443 v1.30.1 docker true true} ...
	I0604 22:20:10.963991    4212 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-609500-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.20.138.190
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-609500 Namespace:default APIServerHAVIP:172.20.143.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0604 22:20:10.964078    4212 kube-vip.go:115] generating kube-vip config ...
	I0604 22:20:10.976416    4212 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0604 22:20:11.005894    4212 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0604 22:20:11.005894    4212 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.20.143.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0604 22:20:11.019270    4212 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0604 22:20:11.035691    4212 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.1': No such file or directory
	
	Initiating transfer...
	I0604 22:20:11.047718    4212 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.1
	I0604 22:20:11.069777    4212 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet.sha256
	I0604 22:20:11.070002    4212 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl.sha256
	I0604 22:20:11.069777    4212 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm.sha256
	I0604 22:20:11.070285    4212 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubeadm -> /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0604 22:20:11.070133    4212 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubectl -> /var/lib/minikube/binaries/v1.30.1/kubectl
	I0604 22:20:11.086586    4212 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubectl
	I0604 22:20:11.086586    4212 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0604 22:20:11.088547    4212 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0604 22:20:11.095652    4212 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubectl': No such file or directory
	I0604 22:20:11.095813    4212 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubectl --> /var/lib/minikube/binaries/v1.30.1/kubectl (51454104 bytes)
	I0604 22:20:11.134796    4212 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubelet -> /var/lib/minikube/binaries/v1.30.1/kubelet
	I0604 22:20:11.134796    4212 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubeadm': No such file or directory
	I0604 22:20:11.134956    4212 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubeadm --> /var/lib/minikube/binaries/v1.30.1/kubeadm (50249880 bytes)
	I0604 22:20:11.150016    4212 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubelet
	I0604 22:20:11.196723    4212 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubelet': No such file or directory
	I0604 22:20:11.197065    4212 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubelet --> /var/lib/minikube/binaries/v1.30.1/kubelet (100100024 bytes)
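
The kubelet/kubectl/kubeadm URLs above carry a ?checksum=file:<url>.sha256 query, which indicates each binary is checked against its published SHA-256 digest before being staged onto the node. A minimal Go sketch of that download-and-verify step, assuming a plain HTTP fetch of both the binary and its .sha256 file (illustrative only, not minikube's actual downloader):

// Illustrative sketch: fetch a release binary and verify it against the
// published SHA-256 digest, mirroring the "?checksum=file:...sha256" URLs above.
package main

import (
    "crypto/sha256"
    "encoding/hex"
    "fmt"
    "io"
    "net/http"
    "os"
    "strings"
)

func fetch(url string) ([]byte, error) {
    resp, err := http.Get(url)
    if err != nil {
        return nil, err
    }
    defer resp.Body.Close()
    return io.ReadAll(resp.Body)
}

func main() {
    binURL := "https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl"
    sumURL := binURL + ".sha256"

    bin, err := fetch(binURL)
    if err != nil {
        panic(err)
    }
    want, err := fetch(sumURL) // the .sha256 file contains the hex digest
    if err != nil {
        panic(err)
    }

    got := sha256.Sum256(bin)
    if hex.EncodeToString(got[:]) != strings.TrimSpace(string(want)) {
        panic("checksum mismatch")
    }
    fmt.Println("checksum OK")
    _ = os.WriteFile("kubectl", bin, 0o755) // stage the verified binary
}
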
	I0604 22:20:12.583443    4212 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0604 22:20:12.604101    4212 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0604 22:20:12.639649    4212 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0604 22:20:12.674681    4212 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0604 22:20:12.724903    4212 ssh_runner.go:195] Run: grep 172.20.143.254	control-plane.minikube.internal$ /etc/hosts
	I0604 22:20:12.734124    4212 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.20.143.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
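
The /etc/hosts rewrite above is idempotent: any stale control-plane.minikube.internal entry is filtered out before the current VIP is appended and the temp file is copied back into place. A small Go sketch of the same upsert logic (names and example addresses are illustrative; minikube performs this remotely via the bash one-liner shown in the log):

// Illustrative sketch of the idempotent hosts update: drop any existing
// line ending in "\tcontrol-plane.minikube.internal", then append the VIP.
package main

import (
    "fmt"
    "strings"
)

func upsertHost(hostsFile, ip, host string) string {
    var kept []string
    for _, line := range strings.Split(hostsFile, "\n") {
        if strings.HasSuffix(line, "\t"+host) {
            continue // remove the stale entry
        }
        kept = append(kept, line)
    }
    kept = append(kept, ip+"\t"+host)
    return strings.Join(kept, "\n")
}

func main() {
    in := "127.0.0.1\tlocalhost\n172.20.99.99\tcontrol-plane.minikube.internal"
    fmt.Println(upsertHost(in, "172.20.143.254", "control-plane.minikube.internal"))
}
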
	I0604 22:20:12.774188    4212 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0604 22:20:12.991005    4212 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0604 22:20:13.026934    4212 host.go:66] Checking if "ha-609500" exists ...
	I0604 22:20:13.027749    4212 start.go:316] joinCluster: &{Name:ha-609500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19024/minikube-v1.33.1-1717518792-19024-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717518322-19024@sha256:d2210ba725128d67c6173c8b8d82d6c8736e8dad7a6c389a278f795205c6764f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-609500 Namespace:default APIServerHAVIP:172.20.143.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.131.101 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.20.128.86 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:172.20.138.190 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0604 22:20:13.027966    4212 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0604 22:20:13.027966    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-609500 ).state
	I0604 22:20:15.416063    4212 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 22:20:15.427680    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:20:15.427778    4212 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-609500 ).networkadapters[0]).ipaddresses[0]
	I0604 22:20:18.214038    4212 main.go:141] libmachine: [stdout =====>] : 172.20.131.101
	
	I0604 22:20:18.214038    4212 main.go:141] libmachine: [stderr =====>] : 
	I0604 22:20:18.227310    4212 sshutil.go:53] new ssh client: &{IP:172.20.131.101 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-609500\id_rsa Username:docker}
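
To run the token command over SSH, the primary node's address is first rediscovered by shelling out to PowerShell and reading the first IP of the VM's first network adapter; that address then backs the new SSH client above. A Go sketch of that Hyper-V query, using the same PowerShell invocation seen in the log (illustrative; minikube issues this through its libmachine driver):

// Illustrative sketch: ask Hyper-V (via PowerShell) for the first IP address
// of the VM's first network adapter, mirroring the Get-VM query above.
package main

import (
    "fmt"
    "os/exec"
    "strings"
)

func hypervVMIP(vmName string) (string, error) {
    query := fmt.Sprintf("(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", vmName)
    out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", query).Output()
    if err != nil {
        return "", err
    }
    return strings.TrimSpace(string(out)), nil
}

func main() {
    ip, err := hypervVMIP("ha-609500")
    if err != nil {
        panic(err)
    }
    fmt.Println("VM IP:", ip) // e.g. 172.20.131.101, then used for the SSH client
}
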
	I0604 22:20:18.463986    4212 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm token create --print-join-command --ttl=0": (5.4359772s)
	I0604 22:20:18.463986    4212 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:172.20.138.190 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0604 22:20:18.463986    4212 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token jx6qm8.mzmojl3pfbuz827c --discovery-token-ca-cert-hash sha256:0ff0b921e821f4428dbcf012dd411f291cf65527f1de8321919343295d694e15 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-609500-m03 --control-plane --apiserver-advertise-address=172.20.138.190 --apiserver-bind-port=8443"
	I0604 22:21:03.632199    4212 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token jx6qm8.mzmojl3pfbuz827c --discovery-token-ca-cert-hash sha256:0ff0b921e821f4428dbcf012dd411f291cf65527f1de8321919343295d694e15 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-609500-m03 --control-plane --apiserver-advertise-address=172.20.138.190 --apiserver-bind-port=8443": (45.1678572s)
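
The join itself is two steps: a join token is minted on the existing control plane (kubeadm token create --print-join-command --ttl=0), then kubeadm join is run on m03 with --control-plane so the new node becomes an additional control-plane member behind control-plane.minikube.internal:8443. A sketch that assembles the same join invocation from its parts (illustrative; the token and CA hash are whatever the token-create step printed):

// Illustrative sketch: build the control-plane join command seen in the log
// from its components.
package main

import (
    "fmt"
    "strings"
)

func buildJoinCmd(endpoint, token, caHash, nodeName, advertiseIP string) string {
    args := []string{
        "kubeadm", "join", endpoint,
        "--token", token,
        "--discovery-token-ca-cert-hash", "sha256:" + caHash,
        "--ignore-preflight-errors=all",
        "--cri-socket", "unix:///var/run/cri-dockerd.sock",
        "--node-name=" + nodeName,
        "--control-plane",
        "--apiserver-advertise-address=" + advertiseIP,
        "--apiserver-bind-port=8443",
    }
    return strings.Join(args, " ")
}

func main() {
    fmt.Println(buildJoinCmd(
        "control-plane.minikube.internal:8443",
        "<token>", "<ca-cert-hash>",
        "ha-609500-m03", "172.20.138.190"))
}
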
	I0604 22:21:03.632199    4212 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0604 22:21:04.602789    4212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-609500-m03 minikube.k8s.io/updated_at=2024_06_04T22_21_04_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=901ac483c3e1097c63cda7493d918b612a8127f5 minikube.k8s.io/name=ha-609500 minikube.k8s.io/primary=false
	I0604 22:21:05.083489    4212 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-609500-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0604 22:21:05.264102    4212 start.go:318] duration metric: took 52.2359416s to joinCluster
	I0604 22:21:05.264431    4212 start.go:234] Will wait 6m0s for node &{Name:m03 IP:172.20.138.190 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0604 22:21:05.268143    4212 out.go:177] * Verifying Kubernetes components...
	I0604 22:21:05.265648    4212 config.go:182] Loaded profile config "ha-609500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0604 22:21:05.283509    4212 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0604 22:21:05.730998    4212 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0604 22:21:05.784578    4212 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0604 22:21:05.784578    4212 kapi.go:59] client config for ha-609500: &rest.Config{Host:"https://172.20.143.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-609500\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-609500\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x240e1a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0604 22:21:05.784578    4212 kubeadm.go:477] Overriding stale ClientConfig host https://172.20.143.254:8443 with https://172.20.131.101:8443
	I0604 22:21:05.786260    4212 node_ready.go:35] waiting up to 6m0s for node "ha-609500-m03" to be "Ready" ...
	I0604 22:21:05.786455    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500-m03
	I0604 22:21:05.786455    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:05.786455    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:05.786455    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:05.806924    4212 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0604 22:21:06.298213    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500-m03
	I0604 22:21:06.298213    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:06.298213    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:06.298213    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:06.305051    4212 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0604 22:21:06.796635    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500-m03
	I0604 22:21:06.796733    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:06.796733    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:06.796733    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:06.804662    4212 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0604 22:21:07.297865    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500-m03
	I0604 22:21:07.297865    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:07.297865    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:07.297865    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:07.302670    4212 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0604 22:21:07.799840    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500-m03
	I0604 22:21:07.800046    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:07.800046    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:07.800046    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:07.805636    4212 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0604 22:21:07.806392    4212 node_ready.go:53] node "ha-609500-m03" has status "Ready":"False"
	I0604 22:21:08.298024    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500-m03
	I0604 22:21:08.298024    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:08.298024    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:08.298024    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:08.418517    4212 round_trippers.go:574] Response Status: 200 OK in 120 milliseconds
	I0604 22:21:08.794778    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500-m03
	I0604 22:21:08.794778    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:08.794778    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:08.794778    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:08.800217    4212 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0604 22:21:09.296395    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500-m03
	I0604 22:21:09.296395    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:09.296513    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:09.296513    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:09.296820    4212 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0604 22:21:09.800120    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500-m03
	I0604 22:21:09.800120    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:09.800120    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:09.800120    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:09.800884    4212 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0604 22:21:09.807493    4212 node_ready.go:53] node "ha-609500-m03" has status "Ready":"False"
	I0604 22:21:10.288008    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500-m03
	I0604 22:21:10.288008    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:10.288008    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:10.288008    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:10.288552    4212 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0604 22:21:10.798569    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500-m03
	I0604 22:21:10.798569    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:10.798569    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:10.798569    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:10.803854    4212 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0604 22:21:11.306200    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500-m03
	I0604 22:21:11.306200    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:11.306200    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:11.306200    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:11.306761    4212 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0604 22:21:11.792415    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500-m03
	I0604 22:21:11.792534    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:11.792534    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:11.792534    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:11.792819    4212 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0604 22:21:11.798258    4212 node_ready.go:49] node "ha-609500-m03" has status "Ready":"True"
	I0604 22:21:11.798358    4212 node_ready.go:38] duration metric: took 6.011951s for node "ha-609500-m03" to be "Ready" ...
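
The node_ready wait above simply polls GET /api/v1/nodes/ha-609500-m03 roughly every 500ms until the node's Ready condition reports True (about 6s here). A stdlib-only sketch of that readiness check against the same endpoint (illustrative; the client certificates and cluster CA used by the real client are omitted, as is any overall timeout):

// Illustrative sketch: poll /api/v1/nodes/<name> until the Ready condition
// is True, mirroring the round-tripper loop above. TLS/auth setup is elided.
package main

import (
    "encoding/json"
    "fmt"
    "net/http"
    "time"
)

type node struct {
    Status struct {
        Conditions []struct {
            Type   string `json:"type"`
            Status string `json:"status"`
        } `json:"conditions"`
    } `json:"status"`
}

func nodeReady(client *http.Client, apiServer, name string) (bool, error) {
    resp, err := client.Get(apiServer + "/api/v1/nodes/" + name)
    if err != nil {
        return false, err
    }
    defer resp.Body.Close()
    var n node
    if err := json.NewDecoder(resp.Body).Decode(&n); err != nil {
        return false, err
    }
    for _, c := range n.Status.Conditions {
        if c.Type == "Ready" && c.Status == "True" {
            return true, nil
        }
    }
    return false, nil
}

func main() {
    client := &http.Client{Timeout: 10 * time.Second}
    for {
        ok, err := nodeReady(client, "https://172.20.131.101:8443", "ha-609500-m03")
        if err == nil && ok {
            fmt.Println("node is Ready")
            return
        }
        time.Sleep(500 * time.Millisecond)
    }
}
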
	I0604 22:21:11.798358    4212 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0604 22:21:11.798538    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods
	I0604 22:21:11.798538    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:11.798538    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:11.798538    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:11.824279    4212 round_trippers.go:574] Response Status: 200 OK in 25 milliseconds
	I0604 22:21:11.840924    4212 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-r68pn" in "kube-system" namespace to be "Ready" ...
	I0604 22:21:11.840924    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-r68pn
	I0604 22:21:11.840924    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:11.841488    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:11.841488    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:11.847420    4212 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0604 22:21:11.848349    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500
	I0604 22:21:11.848349    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:11.848349    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:11.848349    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:11.853687    4212 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0604 22:21:11.854843    4212 pod_ready.go:92] pod "coredns-7db6d8ff4d-r68pn" in "kube-system" namespace has status "Ready":"True"
	I0604 22:21:11.854843    4212 pod_ready.go:81] duration metric: took 13.9189ms for pod "coredns-7db6d8ff4d-r68pn" in "kube-system" namespace to be "Ready" ...
	I0604 22:21:11.854843    4212 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-zlxf9" in "kube-system" namespace to be "Ready" ...
	I0604 22:21:11.855179    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-zlxf9
	I0604 22:21:11.855253    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:11.855253    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:11.855253    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:11.859775    4212 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0604 22:21:11.861207    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500
	I0604 22:21:11.861207    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:11.861207    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:11.861207    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:11.861734    4212 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0604 22:21:11.865664    4212 pod_ready.go:92] pod "coredns-7db6d8ff4d-zlxf9" in "kube-system" namespace has status "Ready":"True"
	I0604 22:21:11.865664    4212 pod_ready.go:81] duration metric: took 10.8201ms for pod "coredns-7db6d8ff4d-zlxf9" in "kube-system" namespace to be "Ready" ...
	I0604 22:21:11.865664    4212 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-609500" in "kube-system" namespace to be "Ready" ...
	I0604 22:21:11.865664    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods/etcd-ha-609500
	I0604 22:21:11.865664    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:11.865664    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:11.865664    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:11.870430    4212 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0604 22:21:11.871620    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500
	I0604 22:21:11.871678    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:11.871678    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:11.871678    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:11.875250    4212 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0604 22:21:11.876098    4212 pod_ready.go:92] pod "etcd-ha-609500" in "kube-system" namespace has status "Ready":"True"
	I0604 22:21:11.876098    4212 pod_ready.go:81] duration metric: took 10.4346ms for pod "etcd-ha-609500" in "kube-system" namespace to be "Ready" ...
	I0604 22:21:11.876098    4212 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-609500-m02" in "kube-system" namespace to be "Ready" ...
	I0604 22:21:11.876098    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods/etcd-ha-609500-m02
	I0604 22:21:11.876098    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:11.876098    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:11.876098    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:11.876723    4212 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0604 22:21:11.882006    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500-m02
	I0604 22:21:11.882006    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:11.882552    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:11.882552    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:11.904727    4212 round_trippers.go:574] Response Status: 200 OK in 22 milliseconds
	I0604 22:21:11.905266    4212 pod_ready.go:92] pod "etcd-ha-609500-m02" in "kube-system" namespace has status "Ready":"True"
	I0604 22:21:11.905266    4212 pod_ready.go:81] duration metric: took 29.1673ms for pod "etcd-ha-609500-m02" in "kube-system" namespace to be "Ready" ...
	I0604 22:21:11.905266    4212 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-609500-m03" in "kube-system" namespace to be "Ready" ...
	I0604 22:21:11.995852    4212 request.go:629] Waited for 89.8851ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods/etcd-ha-609500-m03
	I0604 22:21:11.995852    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods/etcd-ha-609500-m03
	I0604 22:21:11.995852    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:11.995852    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:11.995852    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:12.002111    4212 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0604 22:21:12.198529    4212 request.go:629] Waited for 195.5625ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.131.101:8443/api/v1/nodes/ha-609500-m03
	I0604 22:21:12.198529    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500-m03
	I0604 22:21:12.198529    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:12.198529    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:12.198529    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:12.204359    4212 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0604 22:21:12.413883    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods/etcd-ha-609500-m03
	I0604 22:21:12.414010    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:12.414129    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:12.414129    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:12.414364    4212 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0604 22:21:12.599281    4212 request.go:629] Waited for 179.2562ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.131.101:8443/api/v1/nodes/ha-609500-m03
	I0604 22:21:12.599457    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500-m03
	I0604 22:21:12.599457    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:12.599542    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:12.599542    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:12.600255    4212 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0604 22:21:12.906854    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods/etcd-ha-609500-m03
	I0604 22:21:12.906854    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:12.906854    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:12.906854    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:12.912387    4212 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0604 22:21:12.999791    4212 request.go:629] Waited for 86.4028ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.131.101:8443/api/v1/nodes/ha-609500-m03
	I0604 22:21:12.999791    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500-m03
	I0604 22:21:12.999791    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:12.999791    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:12.999791    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:13.001813    4212 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0604 22:21:13.006826    4212 pod_ready.go:92] pod "etcd-ha-609500-m03" in "kube-system" namespace has status "Ready":"True"
	I0604 22:21:13.006826    4212 pod_ready.go:81] duration metric: took 1.1015518s for pod "etcd-ha-609500-m03" in "kube-system" namespace to be "Ready" ...
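
The repeated "Waited for ... due to client-side throttling, not priority and fairness" lines come from the Kubernetes client's own rate limiter, which by default allows roughly 5 requests/second with a burst of 10 and therefore delays this rapid pod/node GET sequence. If one wanted to raise that limit on a client-go rest.Config, it would look roughly like this (values and kubeconfig path are illustrative, not what the test uses):

// Illustrative sketch: raise client-go's client-side rate limits so rapid
// GET sequences like the one above are not artificially delayed.
package main

import (
    "fmt"

    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
    if err != nil {
        panic(err)
    }
    cfg.QPS = 50    // default is about 5 requests/second
    cfg.Burst = 100 // default burst is about 10
    clientset, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    fmt.Printf("client ready: %T\n", clientset)
}
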
	I0604 22:21:13.006826    4212 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-609500" in "kube-system" namespace to be "Ready" ...
	I0604 22:21:13.213243    4212 request.go:629] Waited for 206.2963ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-609500
	I0604 22:21:13.213530    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-609500
	I0604 22:21:13.213530    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:13.213530    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:13.213626    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:13.214244    4212 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0604 22:21:13.401588    4212 request.go:629] Waited for 179.1495ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.131.101:8443/api/v1/nodes/ha-609500
	I0604 22:21:13.401663    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500
	I0604 22:21:13.401663    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:13.401663    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:13.401663    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:13.416157    4212 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0604 22:21:13.417245    4212 pod_ready.go:92] pod "kube-apiserver-ha-609500" in "kube-system" namespace has status "Ready":"True"
	I0604 22:21:13.417245    4212 pod_ready.go:81] duration metric: took 410.4156ms for pod "kube-apiserver-ha-609500" in "kube-system" namespace to be "Ready" ...
	I0604 22:21:13.417245    4212 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-609500-m02" in "kube-system" namespace to be "Ready" ...
	I0604 22:21:13.597382    4212 request.go:629] Waited for 179.9072ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-609500-m02
	I0604 22:21:13.597546    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-609500-m02
	I0604 22:21:13.597546    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:13.597546    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:13.597546    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:13.604877    4212 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0604 22:21:13.793118    4212 request.go:629] Waited for 186.8701ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.131.101:8443/api/v1/nodes/ha-609500-m02
	I0604 22:21:13.793291    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500-m02
	I0604 22:21:13.793291    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:13.793291    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:13.793291    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:13.793987    4212 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0604 22:21:13.799123    4212 pod_ready.go:92] pod "kube-apiserver-ha-609500-m02" in "kube-system" namespace has status "Ready":"True"
	I0604 22:21:13.799243    4212 pod_ready.go:81] duration metric: took 381.9947ms for pod "kube-apiserver-ha-609500-m02" in "kube-system" namespace to be "Ready" ...
	I0604 22:21:13.799243    4212 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-609500-m03" in "kube-system" namespace to be "Ready" ...
	I0604 22:21:14.006171    4212 request.go:629] Waited for 206.7415ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-609500-m03
	I0604 22:21:14.006276    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-609500-m03
	I0604 22:21:14.006276    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:14.006276    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:14.006356    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:14.006536    4212 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0604 22:21:14.206100    4212 request.go:629] Waited for 191.4948ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.131.101:8443/api/v1/nodes/ha-609500-m03
	I0604 22:21:14.206100    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500-m03
	I0604 22:21:14.206100    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:14.206100    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:14.206100    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:14.208482    4212 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0604 22:21:14.403445    4212 request.go:629] Waited for 89.7208ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-609500-m03
	I0604 22:21:14.403580    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-609500-m03
	I0604 22:21:14.403580    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:14.403580    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:14.403580    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:14.407972    4212 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0604 22:21:14.604368    4212 request.go:629] Waited for 196.2793ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.131.101:8443/api/v1/nodes/ha-609500-m03
	I0604 22:21:14.604495    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500-m03
	I0604 22:21:14.604550    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:14.604550    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:14.604550    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:14.625737    4212 round_trippers.go:574] Response Status: 200 OK in 21 milliseconds
	I0604 22:21:14.810754    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-609500-m03
	I0604 22:21:14.810754    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:14.810754    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:14.810754    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:14.811278    4212 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0604 22:21:15.018120    4212 request.go:629] Waited for 199.3575ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.131.101:8443/api/v1/nodes/ha-609500-m03
	I0604 22:21:15.018120    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500-m03
	I0604 22:21:15.018120    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:15.018120    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:15.018120    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:15.023938    4212 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0604 22:21:15.309397    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-609500-m03
	I0604 22:21:15.309653    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:15.309653    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:15.309653    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:15.310201    4212 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0604 22:21:15.400406    4212 request.go:629] Waited for 82.2411ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.131.101:8443/api/v1/nodes/ha-609500-m03
	I0604 22:21:15.400406    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500-m03
	I0604 22:21:15.400406    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:15.400406    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:15.400406    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:15.401139    4212 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0604 22:21:15.813243    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-609500-m03
	I0604 22:21:15.813243    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:15.813243    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:15.813243    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:15.818579    4212 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0604 22:21:15.819951    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500-m03
	I0604 22:21:15.819951    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:15.819951    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:15.819951    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:15.826367    4212 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0604 22:21:15.827528    4212 pod_ready.go:102] pod "kube-apiserver-ha-609500-m03" in "kube-system" namespace has status "Ready":"False"
	I0604 22:21:16.301129    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-609500-m03
	I0604 22:21:16.301129    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:16.301129    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:16.301129    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:16.308116    4212 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0604 22:21:16.309451    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500-m03
	I0604 22:21:16.309451    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:16.309451    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:16.309451    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:16.312948    4212 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0604 22:21:16.803178    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-609500-m03
	I0604 22:21:16.803178    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:16.803178    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:16.803178    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:16.812310    4212 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0604 22:21:16.813108    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500-m03
	I0604 22:21:16.813108    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:16.813108    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:16.813108    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:16.816085    4212 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0604 22:21:17.312731    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-609500-m03
	I0604 22:21:17.312731    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:17.312731    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:17.312731    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:17.321414    4212 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0604 22:21:17.324363    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500-m03
	I0604 22:21:17.324448    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:17.324448    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:17.324448    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:17.327430    4212 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0604 22:21:17.332264    4212 pod_ready.go:92] pod "kube-apiserver-ha-609500-m03" in "kube-system" namespace has status "Ready":"True"
	I0604 22:21:17.332320    4212 pod_ready.go:81] duration metric: took 3.5330495s for pod "kube-apiserver-ha-609500-m03" in "kube-system" namespace to be "Ready" ...
	I0604 22:21:17.332320    4212 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-609500" in "kube-system" namespace to be "Ready" ...
	I0604 22:21:17.332431    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-609500
	I0604 22:21:17.332490    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:17.332490    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:17.332539    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:17.333000    4212 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0604 22:21:17.404977    4212 request.go:629] Waited for 67.1812ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.131.101:8443/api/v1/nodes/ha-609500
	I0604 22:21:17.405184    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500
	I0604 22:21:17.405184    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:17.405184    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:17.405184    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:17.405850    4212 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0604 22:21:17.412283    4212 pod_ready.go:92] pod "kube-controller-manager-ha-609500" in "kube-system" namespace has status "Ready":"True"
	I0604 22:21:17.412283    4212 pod_ready.go:81] duration metric: took 79.9631ms for pod "kube-controller-manager-ha-609500" in "kube-system" namespace to be "Ready" ...
	I0604 22:21:17.412283    4212 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-609500-m02" in "kube-system" namespace to be "Ready" ...
	I0604 22:21:17.603229    4212 request.go:629] Waited for 190.7485ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-609500-m02
	I0604 22:21:17.603909    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-609500-m02
	I0604 22:21:17.603909    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:17.604011    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:17.604011    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:17.609529    4212 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0604 22:21:17.800300    4212 request.go:629] Waited for 190.1319ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.131.101:8443/api/v1/nodes/ha-609500-m02
	I0604 22:21:17.800909    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500-m02
	I0604 22:21:17.800909    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:17.800909    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:17.800909    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:17.809106    4212 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0604 22:21:17.809714    4212 pod_ready.go:92] pod "kube-controller-manager-ha-609500-m02" in "kube-system" namespace has status "Ready":"True"
	I0604 22:21:17.809714    4212 pod_ready.go:81] duration metric: took 397.4278ms for pod "kube-controller-manager-ha-609500-m02" in "kube-system" namespace to be "Ready" ...
	I0604 22:21:17.809714    4212 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-609500-m03" in "kube-system" namespace to be "Ready" ...
	I0604 22:21:18.003120    4212 request.go:629] Waited for 193.185ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-609500-m03
	I0604 22:21:18.003120    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-609500-m03
	I0604 22:21:18.003120    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:18.003120    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:18.003120    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:18.003678    4212 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0604 22:21:18.198781    4212 request.go:629] Waited for 188.9792ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.131.101:8443/api/v1/nodes/ha-609500-m03
	I0604 22:21:18.198781    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500-m03
	I0604 22:21:18.198781    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:18.198781    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:18.198781    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:18.199317    4212 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0604 22:21:18.207545    4212 pod_ready.go:92] pod "kube-controller-manager-ha-609500-m03" in "kube-system" namespace has status "Ready":"True"
	I0604 22:21:18.207545    4212 pod_ready.go:81] duration metric: took 397.8278ms for pod "kube-controller-manager-ha-609500-m03" in "kube-system" namespace to be "Ready" ...
	I0604 22:21:18.207848    4212 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4ppxq" in "kube-system" namespace to be "Ready" ...
	I0604 22:21:18.406090    4212 request.go:629] Waited for 197.9927ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4ppxq
	I0604 22:21:18.406090    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4ppxq
	I0604 22:21:18.406090    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:18.406090    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:18.406090    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:18.406554    4212 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0604 22:21:18.603123    4212 request.go:629] Waited for 196.5256ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.131.101:8443/api/v1/nodes/ha-609500
	I0604 22:21:18.603123    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500
	I0604 22:21:18.603123    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:18.603123    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:18.603371    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:18.611599    4212 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0604 22:21:18.612557    4212 pod_ready.go:92] pod "kube-proxy-4ppxq" in "kube-system" namespace has status "Ready":"True"
	I0604 22:21:18.612557    4212 pod_ready.go:81] duration metric: took 404.7052ms for pod "kube-proxy-4ppxq" in "kube-system" namespace to be "Ready" ...
	I0604 22:21:18.612557    4212 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fnjrb" in "kube-system" namespace to be "Ready" ...
	I0604 22:21:18.800312    4212 request.go:629] Waited for 187.5633ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fnjrb
	I0604 22:21:18.800486    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fnjrb
	I0604 22:21:18.800486    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:18.800486    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:18.800486    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:18.801136    4212 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0604 22:21:19.011197    4212 request.go:629] Waited for 203.8766ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.131.101:8443/api/v1/nodes/ha-609500-m02
	I0604 22:21:19.011810    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500-m02
	I0604 22:21:19.011907    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:19.011907    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:19.011907    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:19.012453    4212 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0604 22:21:19.018359    4212 pod_ready.go:92] pod "kube-proxy-fnjrb" in "kube-system" namespace has status "Ready":"True"
	I0604 22:21:19.018359    4212 pod_ready.go:81] duration metric: took 405.7989ms for pod "kube-proxy-fnjrb" in "kube-system" namespace to be "Ready" ...
	I0604 22:21:19.018910    4212 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mqpzs" in "kube-system" namespace to be "Ready" ...
	I0604 22:21:19.197035    4212 request.go:629] Waited for 177.8832ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mqpzs
	I0604 22:21:19.197097    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mqpzs
	I0604 22:21:19.197097    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:19.197097    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:19.197097    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:19.203782    4212 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0604 22:21:19.407628    4212 request.go:629] Waited for 202.9694ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.131.101:8443/api/v1/nodes/ha-609500-m03
	I0604 22:21:19.407628    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500-m03
	I0604 22:21:19.407628    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:19.407628    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:19.407628    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:19.413344    4212 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0604 22:21:19.416343    4212 pod_ready.go:92] pod "kube-proxy-mqpzs" in "kube-system" namespace has status "Ready":"True"
	I0604 22:21:19.416343    4212 pod_ready.go:81] duration metric: took 397.4297ms for pod "kube-proxy-mqpzs" in "kube-system" namespace to be "Ready" ...
	I0604 22:21:19.416642    4212 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-609500" in "kube-system" namespace to be "Ready" ...
	I0604 22:21:19.599114    4212 request.go:629] Waited for 182.4278ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-609500
	I0604 22:21:19.599114    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-609500
	I0604 22:21:19.599361    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:19.599431    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:19.599473    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:19.605255    4212 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0604 22:21:19.796830    4212 request.go:629] Waited for 190.8505ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.131.101:8443/api/v1/nodes/ha-609500
	I0604 22:21:19.797099    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500
	I0604 22:21:19.797099    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:19.797099    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:19.797099    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:19.797589    4212 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0604 22:21:19.802572    4212 pod_ready.go:92] pod "kube-scheduler-ha-609500" in "kube-system" namespace has status "Ready":"True"
	I0604 22:21:19.802572    4212 pod_ready.go:81] duration metric: took 385.9273ms for pod "kube-scheduler-ha-609500" in "kube-system" namespace to be "Ready" ...
	I0604 22:21:19.802572    4212 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-609500-m02" in "kube-system" namespace to be "Ready" ...
	I0604 22:21:20.009884    4212 request.go:629] Waited for 206.4162ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-609500-m02
	I0604 22:21:20.009884    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-609500-m02
	I0604 22:21:20.010024    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:20.010024    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:20.010024    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:20.020874    4212 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0604 22:21:20.202223    4212 request.go:629] Waited for 179.1886ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.131.101:8443/api/v1/nodes/ha-609500-m02
	I0604 22:21:20.202505    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500-m02
	I0604 22:21:20.202580    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:20.202580    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:20.202580    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:20.214692    4212 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0604 22:21:20.215413    4212 pod_ready.go:92] pod "kube-scheduler-ha-609500-m02" in "kube-system" namespace has status "Ready":"True"
	I0604 22:21:20.215459    4212 pod_ready.go:81] duration metric: took 412.316ms for pod "kube-scheduler-ha-609500-m02" in "kube-system" namespace to be "Ready" ...
	I0604 22:21:20.215540    4212 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-609500-m03" in "kube-system" namespace to be "Ready" ...
	I0604 22:21:20.401431    4212 request.go:629] Waited for 185.8895ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-609500-m03
	I0604 22:21:20.401668    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-609500-m03
	I0604 22:21:20.401668    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:20.401668    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:20.401668    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:20.405898    4212 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0604 22:21:20.596395    4212 request.go:629] Waited for 186.4102ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.131.101:8443/api/v1/nodes/ha-609500-m03
	I0604 22:21:20.596395    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes/ha-609500-m03
	I0604 22:21:20.596395    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:20.596395    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:20.596395    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:20.597016    4212 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0604 22:21:20.602923    4212 pod_ready.go:92] pod "kube-scheduler-ha-609500-m03" in "kube-system" namespace has status "Ready":"True"
	I0604 22:21:20.602923    4212 pod_ready.go:81] duration metric: took 387.3796ms for pod "kube-scheduler-ha-609500-m03" in "kube-system" namespace to be "Ready" ...
	I0604 22:21:20.602923    4212 pod_ready.go:38] duration metric: took 8.8044963s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0604 22:21:20.602923    4212 api_server.go:52] waiting for apiserver process to appear ...
	I0604 22:21:20.616235    4212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0604 22:21:20.643845    4212 api_server.go:72] duration metric: took 15.3791645s to wait for apiserver process to appear ...
	I0604 22:21:20.643920    4212 api_server.go:88] waiting for apiserver healthz status ...
	I0604 22:21:20.643977    4212 api_server.go:253] Checking apiserver healthz at https://172.20.131.101:8443/healthz ...
	I0604 22:21:20.655248    4212 api_server.go:279] https://172.20.131.101:8443/healthz returned 200:
	ok
	I0604 22:21:20.655367    4212 round_trippers.go:463] GET https://172.20.131.101:8443/version
	I0604 22:21:20.655367    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:20.655367    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:20.655367    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:20.657003    4212 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0604 22:21:20.657063    4212 api_server.go:141] control plane version: v1.30.1
	I0604 22:21:20.657157    4212 api_server.go:131] duration metric: took 13.2365ms to wait for apiserver health ...
	I0604 22:21:20.657157    4212 system_pods.go:43] waiting for kube-system pods to appear ...
	I0604 22:21:20.795062    4212 request.go:629] Waited for 137.487ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods
	I0604 22:21:20.795062    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods
	I0604 22:21:20.795062    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:20.795062    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:20.795062    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:20.809292    4212 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0604 22:21:20.819695    4212 system_pods.go:59] 24 kube-system pods found
	I0604 22:21:20.819695    4212 system_pods.go:61] "coredns-7db6d8ff4d-r68pn" [4f018ef8-6a1c-4e18-9f46-2341dca31903] Running
	I0604 22:21:20.819695    4212 system_pods.go:61] "coredns-7db6d8ff4d-zlxf9" [71fcfc44-30ee-4092-9ff7-af29b0ad0012] Running
	I0604 22:21:20.819695    4212 system_pods.go:61] "etcd-ha-609500" [94e7aa9b-cfb1-4910-b464-347d8a5506bc] Running
	I0604 22:21:20.819695    4212 system_pods.go:61] "etcd-ha-609500-m02" [2db71342-8a43-42fd-a415-7f05c00163f6] Running
	I0604 22:21:20.820279    4212 system_pods.go:61] "etcd-ha-609500-m03" [2a048691-b672-40ce-a5de-bddb99ba0246] Running
	I0604 22:21:20.820279    4212 system_pods.go:61] "kindnet-7plk9" [59617539-bb65-430a-a2a6-9b29fe07b8e0] Running
	I0604 22:21:20.820279    4212 system_pods.go:61] "kindnet-bpml8" [c8881f19-8b7c-4de7-90e6-0b77affa003b] Running
	I0604 22:21:20.820279    4212 system_pods.go:61] "kindnet-phj2j" [56d23c07-ebe0-4876-9a2b-e170cbdf2ce2] Running
	I0604 22:21:20.820279    4212 system_pods.go:61] "kube-apiserver-ha-609500" [048ab298-bd5e-4e53-bfd5-315b7b0349aa] Running
	I0604 22:21:20.820279    4212 system_pods.go:61] "kube-apiserver-ha-609500-m02" [72263744-42da-4c56-bad3-7099b69eb3e7] Running
	I0604 22:21:20.820279    4212 system_pods.go:61] "kube-apiserver-ha-609500-m03" [c56ed0b7-dce0-4628-886c-7b078c99aa57] Running
	I0604 22:21:20.820279    4212 system_pods.go:61] "kube-controller-manager-ha-609500" [6641ef19-a87e-425d-b698-04ac420f56f0] Running
	I0604 22:21:20.820279    4212 system_pods.go:61] "kube-controller-manager-ha-609500-m02" [8e6b0735-115c-456a-b99b-9c55270b1cb2] Running
	I0604 22:21:20.820279    4212 system_pods.go:61] "kube-controller-manager-ha-609500-m03" [99f40329-5004-4302-b9e3-71b3c33323e4] Running
	I0604 22:21:20.820279    4212 system_pods.go:61] "kube-proxy-4ppxq" [b0b0ad53-65c5-450e-981e-2034d197fc82] Running
	I0604 22:21:20.820279    4212 system_pods.go:61] "kube-proxy-fnjrb" [274d8218-2645-4664-a7fa-3303767b4f87] Running
	I0604 22:21:20.820279    4212 system_pods.go:61] "kube-proxy-mqpzs" [38dd642e-4689-4125-8cfe-48f08039d3d7] Running
	I0604 22:21:20.820279    4212 system_pods.go:61] "kube-scheduler-ha-609500" [64451eb3-387e-41ad-be19-ba5b3c45f5a8] Running
	I0604 22:21:20.820572    4212 system_pods.go:61] "kube-scheduler-ha-609500-m02" [b33a6f6a-2681-4248-b0dc-2a1d72041a48] Running
	I0604 22:21:20.820572    4212 system_pods.go:61] "kube-scheduler-ha-609500-m03" [026ddba3-e162-44e7-8ceb-1cc50ad79708] Running
	I0604 22:21:20.820572    4212 system_pods.go:61] "kube-vip-ha-609500" [85ca2aa5-05d8-4f1b-80c8-7511304cc2bb] Running
	I0604 22:21:20.820572    4212 system_pods.go:61] "kube-vip-ha-609500-m02" [143e42dd-8e55-449a-921a-d67c132096e6] Running
	I0604 22:21:20.820572    4212 system_pods.go:61] "kube-vip-ha-609500-m03" [f5e7a6dc-d055-425a-bd95-1e7da9341c97] Running
	I0604 22:21:20.820572    4212 system_pods.go:61] "storage-provisioner" [c7f1304c-577a-4baf-84d0-51c6006a05f0] Running
	I0604 22:21:20.820705    4212 system_pods.go:74] duration metric: took 163.3643ms to wait for pod list to return data ...
	I0604 22:21:20.820705    4212 default_sa.go:34] waiting for default service account to be created ...
	I0604 22:21:21.007522    4212 request.go:629] Waited for 186.8159ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.131.101:8443/api/v1/namespaces/default/serviceaccounts
	I0604 22:21:21.007756    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/namespaces/default/serviceaccounts
	I0604 22:21:21.007756    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:21.007756    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:21.007864    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:21.014716    4212 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0604 22:21:21.014716    4212 default_sa.go:45] found service account: "default"
	I0604 22:21:21.014716    4212 default_sa.go:55] duration metric: took 194.0096ms for default service account to be created ...
	I0604 22:21:21.014716    4212 system_pods.go:116] waiting for k8s-apps to be running ...
	I0604 22:21:21.206499    4212 request.go:629] Waited for 191.5686ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods
	I0604 22:21:21.206665    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/namespaces/kube-system/pods
	I0604 22:21:21.206665    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:21.206665    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:21.206665    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:21.218328    4212 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0604 22:21:21.230088    4212 system_pods.go:86] 24 kube-system pods found
	I0604 22:21:21.230088    4212 system_pods.go:89] "coredns-7db6d8ff4d-r68pn" [4f018ef8-6a1c-4e18-9f46-2341dca31903] Running
	I0604 22:21:21.230088    4212 system_pods.go:89] "coredns-7db6d8ff4d-zlxf9" [71fcfc44-30ee-4092-9ff7-af29b0ad0012] Running
	I0604 22:21:21.230088    4212 system_pods.go:89] "etcd-ha-609500" [94e7aa9b-cfb1-4910-b464-347d8a5506bc] Running
	I0604 22:21:21.230088    4212 system_pods.go:89] "etcd-ha-609500-m02" [2db71342-8a43-42fd-a415-7f05c00163f6] Running
	I0604 22:21:21.230088    4212 system_pods.go:89] "etcd-ha-609500-m03" [2a048691-b672-40ce-a5de-bddb99ba0246] Running
	I0604 22:21:21.230088    4212 system_pods.go:89] "kindnet-7plk9" [59617539-bb65-430a-a2a6-9b29fe07b8e0] Running
	I0604 22:21:21.230088    4212 system_pods.go:89] "kindnet-bpml8" [c8881f19-8b7c-4de7-90e6-0b77affa003b] Running
	I0604 22:21:21.230088    4212 system_pods.go:89] "kindnet-phj2j" [56d23c07-ebe0-4876-9a2b-e170cbdf2ce2] Running
	I0604 22:21:21.230088    4212 system_pods.go:89] "kube-apiserver-ha-609500" [048ab298-bd5e-4e53-bfd5-315b7b0349aa] Running
	I0604 22:21:21.230088    4212 system_pods.go:89] "kube-apiserver-ha-609500-m02" [72263744-42da-4c56-bad3-7099b69eb3e7] Running
	I0604 22:21:21.230088    4212 system_pods.go:89] "kube-apiserver-ha-609500-m03" [c56ed0b7-dce0-4628-886c-7b078c99aa57] Running
	I0604 22:21:21.230088    4212 system_pods.go:89] "kube-controller-manager-ha-609500" [6641ef19-a87e-425d-b698-04ac420f56f0] Running
	I0604 22:21:21.230088    4212 system_pods.go:89] "kube-controller-manager-ha-609500-m02" [8e6b0735-115c-456a-b99b-9c55270b1cb2] Running
	I0604 22:21:21.230088    4212 system_pods.go:89] "kube-controller-manager-ha-609500-m03" [99f40329-5004-4302-b9e3-71b3c33323e4] Running
	I0604 22:21:21.230088    4212 system_pods.go:89] "kube-proxy-4ppxq" [b0b0ad53-65c5-450e-981e-2034d197fc82] Running
	I0604 22:21:21.230088    4212 system_pods.go:89] "kube-proxy-fnjrb" [274d8218-2645-4664-a7fa-3303767b4f87] Running
	I0604 22:21:21.230088    4212 system_pods.go:89] "kube-proxy-mqpzs" [38dd642e-4689-4125-8cfe-48f08039d3d7] Running
	I0604 22:21:21.230088    4212 system_pods.go:89] "kube-scheduler-ha-609500" [64451eb3-387e-41ad-be19-ba5b3c45f5a8] Running
	I0604 22:21:21.230088    4212 system_pods.go:89] "kube-scheduler-ha-609500-m02" [b33a6f6a-2681-4248-b0dc-2a1d72041a48] Running
	I0604 22:21:21.230088    4212 system_pods.go:89] "kube-scheduler-ha-609500-m03" [026ddba3-e162-44e7-8ceb-1cc50ad79708] Running
	I0604 22:21:21.230088    4212 system_pods.go:89] "kube-vip-ha-609500" [85ca2aa5-05d8-4f1b-80c8-7511304cc2bb] Running
	I0604 22:21:21.234361    4212 system_pods.go:89] "kube-vip-ha-609500-m02" [143e42dd-8e55-449a-921a-d67c132096e6] Running
	I0604 22:21:21.234361    4212 system_pods.go:89] "kube-vip-ha-609500-m03" [f5e7a6dc-d055-425a-bd95-1e7da9341c97] Running
	I0604 22:21:21.234361    4212 system_pods.go:89] "storage-provisioner" [c7f1304c-577a-4baf-84d0-51c6006a05f0] Running
	I0604 22:21:21.234361    4212 system_pods.go:126] duration metric: took 219.6437ms to wait for k8s-apps to be running ...
	I0604 22:21:21.234488    4212 system_svc.go:44] waiting for kubelet service to be running ....
	I0604 22:21:21.249488    4212 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0604 22:21:21.283205    4212 system_svc.go:56] duration metric: took 48.7166ms WaitForService to wait for kubelet
	I0604 22:21:21.283205    4212 kubeadm.go:576] duration metric: took 16.018519s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0604 22:21:21.283205    4212 node_conditions.go:102] verifying NodePressure condition ...
	I0604 22:21:21.398025    4212 request.go:629] Waited for 114.6923ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.131.101:8443/api/v1/nodes
	I0604 22:21:21.398110    4212 round_trippers.go:463] GET https://172.20.131.101:8443/api/v1/nodes
	I0604 22:21:21.398110    4212 round_trippers.go:469] Request Headers:
	I0604 22:21:21.398110    4212 round_trippers.go:473]     Accept: application/json, */*
	I0604 22:21:21.398110    4212 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 22:21:21.398843    4212 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0604 22:21:21.404671    4212 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0604 22:21:21.404797    4212 node_conditions.go:123] node cpu capacity is 2
	I0604 22:21:21.404797    4212 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0604 22:21:21.404797    4212 node_conditions.go:123] node cpu capacity is 2
	I0604 22:21:21.404797    4212 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0604 22:21:21.404797    4212 node_conditions.go:123] node cpu capacity is 2
	I0604 22:21:21.404797    4212 node_conditions.go:105] duration metric: took 121.5919ms to run NodePressure ...
	I0604 22:21:21.404797    4212 start.go:240] waiting for startup goroutines ...
	I0604 22:21:21.404870    4212 start.go:254] writing updated cluster config ...
	I0604 22:21:21.417929    4212 ssh_runner.go:195] Run: rm -f paused
	I0604 22:21:21.581491    4212 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0604 22:21:21.588650    4212 out.go:177] * Done! kubectl is now configured to use "ha-609500" cluster and "default" namespace by default
	
	
	==> Docker <==
	Jun 04 22:12:59 ha-609500 dockerd[1332]: time="2024-06-04T22:12:59.964583607Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 04 22:12:59 ha-609500 dockerd[1332]: time="2024-06-04T22:12:59.965539709Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 04 22:12:59 ha-609500 dockerd[1332]: time="2024-06-04T22:12:59.965999010Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 04 22:12:59 ha-609500 dockerd[1332]: time="2024-06-04T22:12:59.995534182Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 04 22:12:59 ha-609500 dockerd[1332]: time="2024-06-04T22:12:59.995726482Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 04 22:12:59 ha-609500 dockerd[1332]: time="2024-06-04T22:12:59.995768782Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 04 22:12:59 ha-609500 dockerd[1332]: time="2024-06-04T22:12:59.995880082Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 04 22:22:03 ha-609500 dockerd[1332]: time="2024-06-04T22:22:03.702018077Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 04 22:22:03 ha-609500 dockerd[1332]: time="2024-06-04T22:22:03.702113078Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 04 22:22:03 ha-609500 dockerd[1332]: time="2024-06-04T22:22:03.702133278Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 04 22:22:03 ha-609500 dockerd[1332]: time="2024-06-04T22:22:03.702297379Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 04 22:22:03 ha-609500 cri-dockerd[1231]: time="2024-06-04T22:22:03Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/cd105bf931810d5094fe400f19d7941e4062c0e8296b59dc0adb294e6d176eca/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jun 04 22:22:05 ha-609500 cri-dockerd[1231]: time="2024-06-04T22:22:05Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Jun 04 22:22:05 ha-609500 dockerd[1332]: time="2024-06-04T22:22:05.620169490Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 04 22:22:05 ha-609500 dockerd[1332]: time="2024-06-04T22:22:05.620315691Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 04 22:22:05 ha-609500 dockerd[1332]: time="2024-06-04T22:22:05.620370392Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 04 22:22:05 ha-609500 dockerd[1332]: time="2024-06-04T22:22:05.620498092Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 04 22:23:11 ha-609500 dockerd[1326]: 2024/06/04 22:23:11 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 04 22:23:11 ha-609500 dockerd[1326]: 2024/06/04 22:23:11 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 04 22:23:12 ha-609500 dockerd[1326]: 2024/06/04 22:23:12 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 04 22:23:12 ha-609500 dockerd[1326]: 2024/06/04 22:23:12 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 04 22:23:12 ha-609500 dockerd[1326]: 2024/06/04 22:23:12 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 04 22:23:12 ha-609500 dockerd[1326]: 2024/06/04 22:23:12 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 04 22:23:12 ha-609500 dockerd[1326]: 2024/06/04 22:23:12 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 04 22:23:12 ha-609500 dockerd[1326]: 2024/06/04 22:23:12 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	43eb245091a16       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   17 minutes ago      Running             busybox                   0                   cd105bf931810       busybox-fc5497c4f-m2dsk
	331200672b900       cbb01a7bd410d                                                                                         26 minutes ago      Running             coredns                   0                   07fc3250e17fc       coredns-7db6d8ff4d-zlxf9
	354d29cc4ee64       cbb01a7bd410d                                                                                         26 minutes ago      Running             coredns                   0                   55963b43f59f0       coredns-7db6d8ff4d-r68pn
	b2e01578bf279       6e38f40d628db                                                                                         26 minutes ago      Running             storage-provisioner       0                   f868c2e89359e       storage-provisioner
	eab704b102c1e       kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8              26 minutes ago      Running             kindnet-cni               0                   eb5fec61a1850       kindnet-phj2j
	27ad26efaa029       747097150317f                                                                                         26 minutes ago      Running             kube-proxy                0                   04f2353b96c8e       kube-proxy-4ppxq
	fc670e59a57fc       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     27 minutes ago      Running             kube-vip                  0                   8858e6f093ca0       kube-vip-ha-609500
	150d0f1df1f9b       3861cfcd7c04c                                                                                         27 minutes ago      Running             etcd                      0                   f661270f19b99       etcd-ha-609500
	e9cca2562827d       a52dc94f0a912                                                                                         27 minutes ago      Running             kube-scheduler            0                   a2207aa685938       kube-scheduler-ha-609500
	ca3f58b82ea71       25a1387cdab82                                                                                         27 minutes ago      Running             kube-controller-manager   0                   8e72949429c8a       kube-controller-manager-ha-609500
	469104c1a293e       91be940803172                                                                                         27 minutes ago      Running             kube-apiserver            0                   f15ef59ba79a5       kube-apiserver-ha-609500
	
	
	==> coredns [331200672b90] <==
	[INFO] 10.244.0.4:36499 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.05361587s
	[INFO] 10.244.1.2:37918 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000167002s
	[INFO] 10.244.1.2:38784 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.0000781s
	[INFO] 10.244.1.2:41814 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.039238244s
	[INFO] 10.244.2.2:39799 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.128748783s
	[INFO] 10.244.2.2:33241 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000271902s
	[INFO] 10.244.2.2:43971 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000184102s
	[INFO] 10.244.2.2:41575 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000131401s
	[INFO] 10.244.0.4:43592 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000164401s
	[INFO] 10.244.0.4:52666 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000249401s
	[INFO] 10.244.0.4:59874 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000075101s
	[INFO] 10.244.1.2:58128 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000218902s
	[INFO] 10.244.1.2:52271 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0000733s
	[INFO] 10.244.1.2:39420 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000298302s
	[INFO] 10.244.1.2:44136 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000132s
	[INFO] 10.244.1.2:57848 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0000594s
	[INFO] 10.244.2.2:57496 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001109s
	[INFO] 10.244.0.4:35893 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000161801s
	[INFO] 10.244.0.4:33714 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000137701s
	[INFO] 10.244.2.2:50485 - 5 "PTR IN 1.128.20.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000126001s
	[INFO] 10.244.0.4:43903 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000335002s
	[INFO] 10.244.0.4:59114 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000119901s
	[INFO] 10.244.1.2:47087 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000347203s
	[INFO] 10.244.1.2:41196 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000082001s
	[INFO] 10.244.1.2:38000 - 5 "PTR IN 1.128.20.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0000596s
	
	
	==> coredns [354d29cc4ee6] <==
	[INFO] 10.244.2.2:37187 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.012340285s
	[INFO] 10.244.2.2:34193 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001227s
	[INFO] 10.244.0.4:38135 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.014181198s
	[INFO] 10.244.0.4:36529 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000136301s
	[INFO] 10.244.0.4:46892 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000100601s
	[INFO] 10.244.0.4:44799 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000181301s
	[INFO] 10.244.0.4:50435 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001225s
	[INFO] 10.244.1.2:49492 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000072701s
	[INFO] 10.244.1.2:38408 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000483004s
	[INFO] 10.244.1.2:35903 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000623s
	[INFO] 10.244.2.2:54488 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0000793s
	[INFO] 10.244.2.2:33208 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000096301s
	[INFO] 10.244.2.2:47293 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000616s
	[INFO] 10.244.0.4:44019 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000068s
	[INFO] 10.244.0.4:47749 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000636304s
	[INFO] 10.244.1.2:45546 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000119s
	[INFO] 10.244.1.2:60098 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000079s
	[INFO] 10.244.1.2:59963 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000058901s
	[INFO] 10.244.1.2:59268 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000059s
	[INFO] 10.244.2.2:49237 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000277802s
	[INFO] 10.244.2.2:54226 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000119301s
	[INFO] 10.244.2.2:38788 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000092001s
	[INFO] 10.244.0.4:36682 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0001257s
	[INFO] 10.244.0.4:50471 - 5 "PTR IN 1.128.20.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000159701s
	[INFO] 10.244.1.2:40217 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000139301s
	
	
	==> describe nodes <==
	Name:               ha-609500
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-609500
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=901ac483c3e1097c63cda7493d918b612a8127f5
	                    minikube.k8s.io/name=ha-609500
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_04T22_12_32_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 04 Jun 2024 22:12:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-609500
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 04 Jun 2024 22:39:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 04 Jun 2024 22:37:50 +0000   Tue, 04 Jun 2024 22:12:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 04 Jun 2024 22:37:50 +0000   Tue, 04 Jun 2024 22:12:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 04 Jun 2024 22:37:50 +0000   Tue, 04 Jun 2024 22:12:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 04 Jun 2024 22:37:50 +0000   Tue, 04 Jun 2024 22:12:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.20.131.101
	  Hostname:    ha-609500
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 5174b038796a4663b8fdcff3502fbd2e
	  System UUID:                4fe51a0c-e109-9f4f-897a-12b5e0a75135
	  Boot ID:                    44531ec2-8568-49af-b4f3-f119c23323a6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.1.3
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-m2dsk              0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 coredns-7db6d8ff4d-r68pn             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     26m
	  kube-system                 coredns-7db6d8ff4d-zlxf9             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     26m
	  kube-system                 etcd-ha-609500                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         27m
	  kube-system                 kindnet-phj2j                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      26m
	  kube-system                 kube-apiserver-ha-609500             250m (12%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-controller-manager-ha-609500    200m (10%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-proxy-4ppxq                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 kube-scheduler-ha-609500             100m (5%)     0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-vip-ha-609500                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 26m                kube-proxy       
	  Normal  Starting                 27m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  27m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     27m (x7 over 27m)  kubelet          Node ha-609500 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  27m (x8 over 27m)  kubelet          Node ha-609500 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    27m (x8 over 27m)  kubelet          Node ha-609500 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 27m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  27m                kubelet          Node ha-609500 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    27m                kubelet          Node ha-609500 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     27m                kubelet          Node ha-609500 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  27m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           26m                node-controller  Node ha-609500 event: Registered Node ha-609500 in Controller
	  Normal  NodeReady                26m                kubelet          Node ha-609500 status is now: NodeReady
	  Normal  RegisteredNode           22m                node-controller  Node ha-609500 event: Registered Node ha-609500 in Controller
	  Normal  RegisteredNode           18m                node-controller  Node ha-609500 event: Registered Node ha-609500 in Controller
	
	
	Name:               ha-609500-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-609500-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=901ac483c3e1097c63cda7493d918b612a8127f5
	                    minikube.k8s.io/name=ha-609500
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_04T22_16_58_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 04 Jun 2024 22:16:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-609500-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 04 Jun 2024 22:39:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 04 Jun 2024 22:37:47 +0000   Tue, 04 Jun 2024 22:16:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 04 Jun 2024 22:37:47 +0000   Tue, 04 Jun 2024 22:16:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 04 Jun 2024 22:37:47 +0000   Tue, 04 Jun 2024 22:16:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 04 Jun 2024 22:37:47 +0000   Tue, 04 Jun 2024 22:17:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.20.128.86
	  Hostname:    ha-609500-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 7ae6337051f143178855fd9d2477b35d
	  System UUID:                16a9419c-11c4-e04e-8422-6e5fd7629acf
	  Boot ID:                    e0f80e3c-2f9d-4d80-a96e-da68cc478a81
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.1.3
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-qm589                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 etcd-ha-609500-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         22m
	  kube-system                 kindnet-7plk9                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      22m
	  kube-system                 kube-apiserver-ha-609500-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-controller-manager-ha-609500-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-proxy-fnjrb                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-scheduler-ha-609500-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-vip-ha-609500-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 22m                kube-proxy       
	  Normal  NodeHasSufficientMemory  22m (x8 over 22m)  kubelet          Node ha-609500-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m (x8 over 22m)  kubelet          Node ha-609500-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22m (x7 over 22m)  kubelet          Node ha-609500-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           22m                node-controller  Node ha-609500-m02 event: Registered Node ha-609500-m02 in Controller
	  Normal  RegisteredNode           22m                node-controller  Node ha-609500-m02 event: Registered Node ha-609500-m02 in Controller
	  Normal  RegisteredNode           18m                node-controller  Node ha-609500-m02 event: Registered Node ha-609500-m02 in Controller
	
	
	Name:               ha-609500-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-609500-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=901ac483c3e1097c63cda7493d918b612a8127f5
	                    minikube.k8s.io/name=ha-609500
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_04T22_21_04_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 04 Jun 2024 22:20:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-609500-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 04 Jun 2024 22:39:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 04 Jun 2024 22:37:49 +0000   Tue, 04 Jun 2024 22:20:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 04 Jun 2024 22:37:49 +0000   Tue, 04 Jun 2024 22:20:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 04 Jun 2024 22:37:49 +0000   Tue, 04 Jun 2024 22:20:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 04 Jun 2024 22:37:49 +0000   Tue, 04 Jun 2024 22:21:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.20.138.190
	  Hostname:    ha-609500-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 abe5455664c245a586d7fcd510ba03a9
	  System UUID:                a8a48a27-0986-8744-b933-d146cf528029
	  Boot ID:                    2bec906f-0529-42a4-a365-7362054d68ad
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.1.3
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-gbl9h                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 etcd-ha-609500-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         18m
	  kube-system                 kindnet-bpml8                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      18m
	  kube-system                 kube-apiserver-ha-609500-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-controller-manager-ha-609500-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-proxy-mqpzs                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-scheduler-ha-609500-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-vip-ha-609500-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 18m                kube-proxy       
	  Normal  NodeHasSufficientMemory  18m (x8 over 18m)  kubelet          Node ha-609500-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18m (x8 over 18m)  kubelet          Node ha-609500-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m (x7 over 18m)  kubelet          Node ha-609500-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  18m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           18m                node-controller  Node ha-609500-m03 event: Registered Node ha-609500-m03 in Controller
	  Normal  RegisteredNode           18m                node-controller  Node ha-609500-m03 event: Registered Node ha-609500-m03 in Controller
	  Normal  RegisteredNode           18m                node-controller  Node ha-609500-m03 event: Registered Node ha-609500-m03 in Controller
	
	
	Name:               ha-609500-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-609500-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=901ac483c3e1097c63cda7493d918b612a8127f5
	                    minikube.k8s.io/name=ha-609500
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_04T22_26_48_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 04 Jun 2024 22:26:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-609500-m04
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 04 Jun 2024 22:39:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 04 Jun 2024 22:37:31 +0000   Tue, 04 Jun 2024 22:26:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 04 Jun 2024 22:37:31 +0000   Tue, 04 Jun 2024 22:26:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 04 Jun 2024 22:37:31 +0000   Tue, 04 Jun 2024 22:26:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 04 Jun 2024 22:37:31 +0000   Tue, 04 Jun 2024 22:27:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.20.139.18
	  Hostname:    ha-609500-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 83009d8dfc6e4233b4560e5187aff18d
	  System UUID:                4352a523-7e4d-2e42-ac6f-933af32af427
	  Boot ID:                    029e0b5d-c291-4059-9088-61bda3ce9912
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.1.3
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-7ljf5       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-proxy-hf74k    0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12m                kube-proxy       
	  Normal  NodeHasSufficientMemory  12m (x2 over 12m)  kubelet          Node ha-609500-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x2 over 12m)  kubelet          Node ha-609500-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x2 over 12m)  kubelet          Node ha-609500-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           12m                node-controller  Node ha-609500-m04 event: Registered Node ha-609500-m04 in Controller
	  Normal  RegisteredNode           12m                node-controller  Node ha-609500-m04 event: Registered Node ha-609500-m04 in Controller
	  Normal  RegisteredNode           12m                node-controller  Node ha-609500-m04 event: Registered Node ha-609500-m04 in Controller
	  Normal  NodeReady                12m                kubelet          Node ha-609500-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[  +7.378679] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jun 4 22:11] systemd-fstab-generator[641]: Ignoring "noauto" option for root device
	[  +0.201942] systemd-fstab-generator[653]: Ignoring "noauto" option for root device
	[ +32.976318] systemd-fstab-generator[948]: Ignoring "noauto" option for root device
	[  +0.103649] kauditd_printk_skb: 59 callbacks suppressed
	[  +0.603333] systemd-fstab-generator[987]: Ignoring "noauto" option for root device
	[  +0.213394] systemd-fstab-generator[999]: Ignoring "noauto" option for root device
	[  +0.252332] systemd-fstab-generator[1013]: Ignoring "noauto" option for root device
	[  +2.866463] systemd-fstab-generator[1184]: Ignoring "noauto" option for root device
	[  +0.198410] systemd-fstab-generator[1196]: Ignoring "noauto" option for root device
	[  +0.199632] systemd-fstab-generator[1208]: Ignoring "noauto" option for root device
	[  +0.292667] systemd-fstab-generator[1223]: Ignoring "noauto" option for root device
	[Jun 4 22:12] systemd-fstab-generator[1318]: Ignoring "noauto" option for root device
	[  +0.110759] kauditd_printk_skb: 205 callbacks suppressed
	[  +3.497531] systemd-fstab-generator[1522]: Ignoring "noauto" option for root device
	[  +7.392152] systemd-fstab-generator[1730]: Ignoring "noauto" option for root device
	[  +0.113219] kauditd_printk_skb: 73 callbacks suppressed
	[  +6.355782] kauditd_printk_skb: 67 callbacks suppressed
	[  +4.214130] systemd-fstab-generator[2213]: Ignoring "noauto" option for root device
	[ +14.770971] kauditd_printk_skb: 17 callbacks suppressed
	[  +7.270010] kauditd_printk_skb: 29 callbacks suppressed
	[Jun 4 22:17] kauditd_printk_skb: 26 callbacks suppressed
	[Jun 4 22:22] hrtimer: interrupt took 6651046 ns
	
	
	==> etcd [150d0f1df1f9] <==
	{"level":"warn","ts":"2024-06-04T22:26:57.957479Z","caller":"etcdserver/raft.go:416","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"b411a4cd63654ea3","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"150.554305ms"}
	{"level":"warn","ts":"2024-06-04T22:26:57.958179Z","caller":"etcdserver/raft.go:416","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"8cfef8e34c568672","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"150.717306ms"}
	{"level":"info","ts":"2024-06-04T22:26:57.958484Z","caller":"traceutil/trace.go:171","msg":"trace[1390245817] transaction","detail":"{read_only:false; response_revision:2773; number_of_response:1; }","duration":"339.958548ms","start":"2024-06-04T22:26:57.618507Z","end":"2024-06-04T22:26:57.958465Z","steps":["trace[1390245817] 'process raft request'  (duration: 339.808347ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-04T22:26:57.958974Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-04T22:26:57.618493Z","time spent":"340.045648ms","remote":"127.0.0.1:43296","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4683,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-proxy-ksg2j\" mod_revision:2743 > success:<request_put:<key:\"/registry/pods/kube-system/kube-proxy-ksg2j\" value_size:4632 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-proxy-ksg2j\" > >"}
	{"level":"warn","ts":"2024-06-04T22:26:58.114217Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"118.978246ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/ha-609500-m04\" ","response":"range_response_count:1 size:2812"}
	{"level":"info","ts":"2024-06-04T22:26:58.11436Z","caller":"traceutil/trace.go:171","msg":"trace[244752915] range","detail":"{range_begin:/registry/minions/ha-609500-m04; range_end:; response_count:1; response_revision:2774; }","duration":"119.101047ms","start":"2024-06-04T22:26:57.995187Z","end":"2024-06-04T22:26:58.114288Z","steps":["trace[244752915] 'agreement among raft nodes before linearized reading'  (duration: 84.62416ms)","trace[244752915] 'range keys from in-memory index tree'  (duration: 34.258886ms)"],"step_count":2}
	{"level":"info","ts":"2024-06-04T22:26:58.115072Z","caller":"traceutil/trace.go:171","msg":"trace[1845445164] transaction","detail":"{read_only:false; response_revision:2775; number_of_response:1; }","duration":"107.101482ms","start":"2024-06-04T22:26:58.007958Z","end":"2024-06-04T22:26:58.115059Z","steps":["trace[1845445164] 'process raft request'  (duration: 77.009119ms)","trace[1845445164] 'compare'  (duration: 29.091158ms)"],"step_count":2}
	{"level":"info","ts":"2024-06-04T22:26:58.359618Z","caller":"traceutil/trace.go:171","msg":"trace[660790183] transaction","detail":"{read_only:false; response_revision:2776; number_of_response:1; }","duration":"210.698644ms","start":"2024-06-04T22:26:58.148902Z","end":"2024-06-04T22:26:58.3596Z","steps":["trace[660790183] 'process raft request'  (duration: 164.00999ms)","trace[660790183] 'compare'  (duration: 46.353452ms)"],"step_count":2}
	{"level":"warn","ts":"2024-06-04T22:26:58.360351Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"113.402116ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-06-04T22:26:58.360446Z","caller":"traceutil/trace.go:171","msg":"trace[848983276] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:2775; }","duration":"114.920524ms","start":"2024-06-04T22:26:58.24548Z","end":"2024-06-04T22:26:58.360401Z","steps":["trace[848983276] 'agreement among raft nodes before linearized reading'  (duration: 68.586673ms)","trace[848983276] 'range keys from in-memory index tree'  (duration: 44.825043ms)"],"step_count":2}
	2024/06/04 22:26:59 WARNING: [core] [Server #7] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-06-04T22:27:00.645067Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"116.703232ms","expected-duration":"100ms","prefix":"","request":"header:<ID:1868307635472615024 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/kube-proxy-ksg2j\" mod_revision:2787 > success:<request_put:<key:\"/registry/pods/kube-system/kube-proxy-ksg2j\" value_size:4785 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-proxy-ksg2j\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-06-04T22:27:00.645616Z","caller":"traceutil/trace.go:171","msg":"trace[927116508] transaction","detail":"{read_only:false; response_revision:2822; number_of_response:1; }","duration":"139.979259ms","start":"2024-06-04T22:27:00.505618Z","end":"2024-06-04T22:27:00.645597Z","steps":["trace[927116508] 'process raft request'  (duration: 22.679323ms)","trace[927116508] 'compare'  (duration: 116.622832ms)"],"step_count":2}
	{"level":"info","ts":"2024-06-04T22:27:00.645741Z","caller":"traceutil/trace.go:171","msg":"trace[243039747] transaction","detail":"{read_only:false; response_revision:2823; number_of_response:1; }","duration":"137.014643ms","start":"2024-06-04T22:27:00.508714Z","end":"2024-06-04T22:27:00.645729Z","steps":["trace[243039747] 'process raft request'  (duration: 136.47004ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-04T22:27:04.746998Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"262.901122ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/ha-609500-m04\" ","response":"range_response_count:1 size:3113"}
	{"level":"info","ts":"2024-06-04T22:27:04.747685Z","caller":"traceutil/trace.go:171","msg":"trace[1930965950] range","detail":"{range_begin:/registry/minions/ha-609500-m04; range_end:; response_count:1; response_revision:2844; }","duration":"263.526425ms","start":"2024-06-04T22:27:04.484046Z","end":"2024-06-04T22:27:04.747573Z","steps":["trace[1930965950] 'range keys from in-memory index tree'  (duration: 261.017411ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-04T22:27:24.25101Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1988}
	{"level":"info","ts":"2024-06-04T22:27:24.317736Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1988,"took":"65.773851ms","hash":3294975384,"current-db-size-bytes":3637248,"current-db-size":"3.6 MB","current-db-size-in-use-bytes":2789376,"current-db-size-in-use":"2.8 MB"}
	{"level":"info","ts":"2024-06-04T22:27:24.317936Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3294975384,"revision":1988,"compact-revision":1086}
	{"level":"info","ts":"2024-06-04T22:32:24.287303Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":2902}
	{"level":"info","ts":"2024-06-04T22:32:24.350699Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":2902,"took":"61.78368ms","hash":2449274305,"current-db-size-bytes":3637248,"current-db-size":"3.6 MB","current-db-size-in-use-bytes":2674688,"current-db-size-in-use":"2.7 MB"}
	{"level":"info","ts":"2024-06-04T22:32:24.350758Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2449274305,"revision":2902,"compact-revision":1988}
	{"level":"info","ts":"2024-06-04T22:37:24.331809Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":3644}
	{"level":"info","ts":"2024-06-04T22:37:24.406275Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":3644,"took":"73.780601ms","hash":1504880957,"current-db-size-bytes":3637248,"current-db-size":"3.6 MB","current-db-size-in-use-bytes":2093056,"current-db-size-in-use":"2.1 MB"}
	{"level":"info","ts":"2024-06-04T22:37:24.40643Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1504880957,"revision":3644,"compact-revision":2902}
	
	
	==> kernel <==
	 22:39:36 up 29 min,  0 users,  load average: 0.62, 0.83, 0.65
	Linux ha-609500 5.10.207 #1 SMP Tue Jun 4 20:09:42 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [eab704b102c1] <==
	I0604 22:39:06.146111       1 main.go:250] Node ha-609500-m04 has CIDR [10.244.3.0/24] 
	I0604 22:39:16.164035       1 main.go:223] Handling node with IPs: map[172.20.131.101:{}]
	I0604 22:39:16.164091       1 main.go:227] handling current node
	I0604 22:39:16.164137       1 main.go:223] Handling node with IPs: map[172.20.128.86:{}]
	I0604 22:39:16.164147       1 main.go:250] Node ha-609500-m02 has CIDR [10.244.1.0/24] 
	I0604 22:39:16.164779       1 main.go:223] Handling node with IPs: map[172.20.138.190:{}]
	I0604 22:39:16.164798       1 main.go:250] Node ha-609500-m03 has CIDR [10.244.2.0/24] 
	I0604 22:39:16.165010       1 main.go:223] Handling node with IPs: map[172.20.139.18:{}]
	I0604 22:39:16.165125       1 main.go:250] Node ha-609500-m04 has CIDR [10.244.3.0/24] 
	I0604 22:39:26.175249       1 main.go:223] Handling node with IPs: map[172.20.131.101:{}]
	I0604 22:39:26.175600       1 main.go:227] handling current node
	I0604 22:39:26.175741       1 main.go:223] Handling node with IPs: map[172.20.128.86:{}]
	I0604 22:39:26.175780       1 main.go:250] Node ha-609500-m02 has CIDR [10.244.1.0/24] 
	I0604 22:39:26.175985       1 main.go:223] Handling node with IPs: map[172.20.138.190:{}]
	I0604 22:39:26.175996       1 main.go:250] Node ha-609500-m03 has CIDR [10.244.2.0/24] 
	I0604 22:39:26.176082       1 main.go:223] Handling node with IPs: map[172.20.139.18:{}]
	I0604 22:39:26.176193       1 main.go:250] Node ha-609500-m04 has CIDR [10.244.3.0/24] 
	I0604 22:39:36.199318       1 main.go:223] Handling node with IPs: map[172.20.131.101:{}]
	I0604 22:39:36.208712       1 main.go:227] handling current node
	I0604 22:39:36.208864       1 main.go:223] Handling node with IPs: map[172.20.128.86:{}]
	I0604 22:39:36.209012       1 main.go:250] Node ha-609500-m02 has CIDR [10.244.1.0/24] 
	I0604 22:39:36.210339       1 main.go:223] Handling node with IPs: map[172.20.138.190:{}]
	I0604 22:39:36.212725       1 main.go:250] Node ha-609500-m03 has CIDR [10.244.2.0/24] 
	I0604 22:39:36.212965       1 main.go:223] Handling node with IPs: map[172.20.139.18:{}]
	I0604 22:39:36.213094       1 main.go:250] Node ha-609500-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [469104c1a293] <==
	E0604 22:20:56.629608       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	E0604 22:20:56.629961       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0604 22:20:56.630106       1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 6.1µs, panicked: false, err: context canceled, panic-reason: <nil>
	E0604 22:20:56.631541       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0604 22:20:56.631760       1 timeout.go:142] post-timeout activity - time-elapsed: 2.257906ms, PATCH "/api/v1/namespaces/default/events/ha-609500-m03.17d5ed378a03c55f" result: <nil>
	E0604 22:22:09.229955       1 conn.go:339] Error on socket receive: read tcp 172.20.143.254:8443->172.20.128.1:63672: use of closed network connection
	E0604 22:22:09.858879       1 conn.go:339] Error on socket receive: read tcp 172.20.143.254:8443->172.20.128.1:63675: use of closed network connection
	E0604 22:22:11.498090       1 conn.go:339] Error on socket receive: read tcp 172.20.143.254:8443->172.20.128.1:63677: use of closed network connection
	E0604 22:22:12.186172       1 conn.go:339] Error on socket receive: read tcp 172.20.143.254:8443->172.20.128.1:63679: use of closed network connection
	E0604 22:22:12.757480       1 conn.go:339] Error on socket receive: read tcp 172.20.143.254:8443->172.20.128.1:63681: use of closed network connection
	E0604 22:22:13.343851       1 conn.go:339] Error on socket receive: read tcp 172.20.143.254:8443->172.20.128.1:63683: use of closed network connection
	E0604 22:22:13.899248       1 conn.go:339] Error on socket receive: read tcp 172.20.143.254:8443->172.20.128.1:63685: use of closed network connection
	E0604 22:22:14.477844       1 conn.go:339] Error on socket receive: read tcp 172.20.143.254:8443->172.20.128.1:63687: use of closed network connection
	E0604 22:22:15.044794       1 conn.go:339] Error on socket receive: read tcp 172.20.143.254:8443->172.20.128.1:63689: use of closed network connection
	E0604 22:22:16.052087       1 conn.go:339] Error on socket receive: read tcp 172.20.143.254:8443->172.20.128.1:63693: use of closed network connection
	E0604 22:22:26.609233       1 conn.go:339] Error on socket receive: read tcp 172.20.143.254:8443->172.20.128.1:63695: use of closed network connection
	E0604 22:22:27.190517       1 conn.go:339] Error on socket receive: read tcp 172.20.143.254:8443->172.20.128.1:63698: use of closed network connection
	E0604 22:22:37.736731       1 conn.go:339] Error on socket receive: read tcp 172.20.143.254:8443->172.20.128.1:63700: use of closed network connection
	E0604 22:22:38.272474       1 conn.go:339] Error on socket receive: read tcp 172.20.143.254:8443->172.20.128.1:63702: use of closed network connection
	E0604 22:22:48.830886       1 conn.go:339] Error on socket receive: read tcp 172.20.143.254:8443->172.20.128.1:63704: use of closed network connection
	E0604 22:26:59.177698       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}: context canceled
	E0604 22:26:59.177808       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	E0604 22:26:59.179112       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0604 22:26:59.179162       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0604 22:26:59.180564       1 timeout.go:142] post-timeout activity - time-elapsed: 3.511619ms, GET "/api/v1/nodes/ha-609500-m04" result: <nil>
	
	
	==> kube-controller-manager [ca3f58b82ea7] <==
	I0604 22:16:51.873894       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-609500-m02" podCIDRs=["10.244.1.0/24"]
	I0604 22:16:54.459192       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-609500-m02"
	I0604 22:20:55.787683       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-609500-m03\" does not exist"
	I0604 22:20:55.814501       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-609500-m03" podCIDRs=["10.244.2.0/24"]
	I0604 22:20:59.547768       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-609500-m03"
	I0604 22:22:02.524560       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="191.113809ms"
	I0604 22:22:02.593745       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="69.038946ms"
	I0604 22:22:02.899859       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="305.981817ms"
	I0604 22:22:03.112227       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="209.934456ms"
	I0604 22:22:03.256053       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="143.741833ms"
	I0604 22:22:03.475243       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="219.132527ms"
	I0604 22:22:03.476334       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="1.024908ms"
	I0604 22:22:03.634968       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="158.496049ms"
	I0604 22:22:03.635552       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="395.703µs"
	I0604 22:22:05.797008       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="79.4µs"
	I0604 22:22:06.087307       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="38.843569ms"
	I0604 22:22:06.124510       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="37.058055ms"
	I0604 22:22:06.188052       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="61.850827ms"
	I0604 22:22:06.188502       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="82.4µs"
	E0604 22:26:47.575938       1 certificate_controller.go:146] Sync csr-b5dmf failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-b5dmf": the object has been modified; please apply your changes to the latest version and try again
	E0604 22:26:47.612556       1 certificate_controller.go:146] Sync csr-b5dmf failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-b5dmf": the object has been modified; please apply your changes to the latest version and try again
	I0604 22:26:47.678370       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-609500-m04\" does not exist"
	I0604 22:26:47.707004       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-609500-m04" podCIDRs=["10.244.3.0/24"]
	I0604 22:26:49.656888       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-609500-m04"
	I0604 22:27:10.019408       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-609500-m04"
	
	
	==> kube-proxy [27ad26efaa02] <==
	I0604 22:12:45.918221       1 server_linux.go:69] "Using iptables proxy"
	I0604 22:12:45.940330       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.20.131.101"]
	I0604 22:12:46.015378       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0604 22:12:46.015511       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0604 22:12:46.015540       1 server_linux.go:165] "Using iptables Proxier"
	I0604 22:12:46.020469       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0604 22:12:46.021345       1 server.go:872] "Version info" version="v1.30.1"
	I0604 22:12:46.021557       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0604 22:12:46.024931       1 config.go:192] "Starting service config controller"
	I0604 22:12:46.026864       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0604 22:12:46.026345       1 config.go:101] "Starting endpoint slice config controller"
	I0604 22:12:46.027389       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0604 22:12:46.025989       1 config.go:319] "Starting node config controller"
	I0604 22:12:46.027755       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0604 22:12:46.127564       1 shared_informer.go:320] Caches are synced for service config
	I0604 22:12:46.128069       1 shared_informer.go:320] Caches are synced for node config
	I0604 22:12:46.128204       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [e9cca2562827] <==
	W0604 22:12:28.797686       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0604 22:12:28.797851       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0604 22:12:28.870024       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0604 22:12:28.870063       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0604 22:12:28.930507       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0604 22:12:28.930710       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0604 22:12:28.937270       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0604 22:12:28.937325       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0604 22:12:28.964585       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0604 22:12:28.967043       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0604 22:12:29.012723       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0604 22:12:29.013016       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0604 22:12:31.125503       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0604 22:20:56.135803       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-ttthj\": pod kindnet-ttthj is already assigned to node \"ha-609500-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-ttthj" node="ha-609500-m03"
	E0604 22:20:56.136001       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 45a0a448-dfbd-46dd-8c14-eae75989a0a2(kube-system/kindnet-ttthj) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-ttthj"
	E0604 22:20:56.136619       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-ttthj\": pod kindnet-ttthj is already assigned to node \"ha-609500-m03\"" pod="kube-system/kindnet-ttthj"
	I0604 22:20:56.136793       1 schedule_one.go:1064] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-ttthj" node="ha-609500-m03"
	E0604 22:22:02.468216       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-qm589\": pod busybox-fc5497c4f-qm589 is already assigned to node \"ha-609500-m02\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-qm589" node="ha-609500-m02"
	E0604 22:22:02.469195       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 7da8d303-4706-4bb8-8a78-ac1973051987(default/busybox-fc5497c4f-qm589) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-qm589"
	E0604 22:22:02.469345       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-qm589\": pod busybox-fc5497c4f-qm589 is already assigned to node \"ha-609500-m02\"" pod="default/busybox-fc5497c4f-qm589"
	I0604 22:22:02.469438       1 schedule_one.go:1064] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-qm589" node="ha-609500-m02"
	E0604 22:26:47.806936       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-7ljf5\": pod kindnet-7ljf5 is already assigned to node \"ha-609500-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-7ljf5" node="ha-609500-m04"
	E0604 22:26:47.807589       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 13c664cb-ad46-4065-8416-508b38dd08e0(kube-system/kindnet-7ljf5) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-7ljf5"
	E0604 22:26:47.807747       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-7ljf5\": pod kindnet-7ljf5 is already assigned to node \"ha-609500-m04\"" pod="kube-system/kindnet-7ljf5"
	I0604 22:26:47.807961       1 schedule_one.go:1064] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-7ljf5" node="ha-609500-m04"
	
	
	==> kubelet <==
	Jun 04 22:35:31 ha-609500 kubelet[2220]: E0604 22:35:31.213723    2220 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 04 22:35:31 ha-609500 kubelet[2220]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 04 22:35:31 ha-609500 kubelet[2220]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 04 22:35:31 ha-609500 kubelet[2220]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 04 22:35:31 ha-609500 kubelet[2220]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 04 22:36:31 ha-609500 kubelet[2220]: E0604 22:36:31.212819    2220 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 04 22:36:31 ha-609500 kubelet[2220]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 04 22:36:31 ha-609500 kubelet[2220]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 04 22:36:31 ha-609500 kubelet[2220]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 04 22:36:31 ha-609500 kubelet[2220]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 04 22:37:31 ha-609500 kubelet[2220]: E0604 22:37:31.212102    2220 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 04 22:37:31 ha-609500 kubelet[2220]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 04 22:37:31 ha-609500 kubelet[2220]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 04 22:37:31 ha-609500 kubelet[2220]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 04 22:37:31 ha-609500 kubelet[2220]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 04 22:38:31 ha-609500 kubelet[2220]: E0604 22:38:31.212270    2220 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 04 22:38:31 ha-609500 kubelet[2220]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 04 22:38:31 ha-609500 kubelet[2220]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 04 22:38:31 ha-609500 kubelet[2220]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 04 22:38:31 ha-609500 kubelet[2220]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 04 22:39:31 ha-609500 kubelet[2220]: E0604 22:39:31.222977    2220 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 04 22:39:31 ha-609500 kubelet[2220]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 04 22:39:31 ha-609500 kubelet[2220]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 04 22:39:31 ha-609500 kubelet[2220]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 04 22:39:31 ha-609500 kubelet[2220]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0604 22:39:27.126025   11124 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-609500 -n ha-609500
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-609500 -n ha-609500: (13.4788822s)
helpers_test.go:261: (dbg) Run:  kubectl --context ha-609500 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/CopyFile FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/CopyFile (669.03s)
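Note on the recurring stderr warning: every minikube invocation in this run prints "Unable to resolve the current Docker CLI context \"default\"" because the context metadata file under C:\Users\jenkins.minikube6\.docker\contexts\meta does not exist on the Jenkins host. The warning does not affect the cluster, but it makes minikube's stderr non-empty for every command it wraps. A possible cleanup on the host, shown here only as an assumption about this environment and not something the test harness runs, is to inspect and reset the CLI's context state:

	docker context ls
	docker context use default

Both are standard Docker CLI commands; whether they clear the warning on this particular host is untested here.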

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (60.63s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-022000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-022000 -- exec busybox-fc5497c4f-8bcjx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-022000 -- exec busybox-fc5497c4f-8bcjx -- sh -c "ping -c 1 172.20.128.1"
multinode_test.go:583: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-022000 -- exec busybox-fc5497c4f-8bcjx -- sh -c "ping -c 1 172.20.128.1": exit status 1 (10.563253s)

                                                
                                                
-- stdout --
	PING 172.20.128.1 (172.20.128.1): 56 data bytes
	
	--- 172.20.128.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

                                                
                                                
-- /stdout --
** stderr ** 
	W0604 23:19:35.356314   12536 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:584: Failed to ping host (172.20.128.1) from pod (busybox-fc5497c4f-8bcjx): exit status 1
multinode_test.go:572: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-022000 -- exec busybox-fc5497c4f-cbgjv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-022000 -- exec busybox-fc5497c4f-cbgjv -- sh -c "ping -c 1 172.20.128.1"
multinode_test.go:583: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-022000 -- exec busybox-fc5497c4f-cbgjv -- sh -c "ping -c 1 172.20.128.1": exit status 1 (10.5195342s)

                                                
                                                
-- stdout --
	PING 172.20.128.1 (172.20.128.1): 56 data bytes
	
	--- 172.20.128.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

                                                
                                                
-- /stdout --
** stderr ** 
	W0604 23:19:46.459419    4860 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:584: Failed to ping host (172.20.128.1) from pod (busybox-fc5497c4f-cbgjv): exit status 1
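In both pods the nslookup of host.minikube.internal completes (those steps have end times in the audit log below), while ICMP echo to the host address 172.20.128.1 is dropped with 100% packet loss, so the failure is confined to guest-to-host ICMP rather than DNS or pod networking. A common cause on Windows hosts is the host firewall not allowing inbound ICMPv4 echo on the Hyper-V virtual switch; a hypothetical remediation on the Jenkins host (an assumption about this environment, not part of the test harness) would be a rule such as:

	New-NetFirewallRule -DisplayName "ICMPv4-In (minikube ping test)" -Protocol ICMPv4 -IcmpType 8 -Direction Inbound -Action Allow

New-NetFirewallRule and these parameters are standard NetSecurity cmdlet options; the rule name is illustrative only.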
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-022000 -n multinode-022000
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-022000 -n multinode-022000: (13.4007349s)
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-022000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-022000 logs -n 25: (9.3798204s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| ssh     | mount-start-2-821000 ssh -- ls                    | mount-start-2-821000 | minikube6\jenkins | v1.33.1 | 04 Jun 24 23:07 UTC | 04 Jun 24 23:07 UTC |
	|         | /minikube-host                                    |                      |                   |         |                     |                     |
	| delete  | -p mount-start-1-821000                           | mount-start-1-821000 | minikube6\jenkins | v1.33.1 | 04 Jun 24 23:07 UTC | 04 Jun 24 23:08 UTC |
	|         | --alsologtostderr -v=5                            |                      |                   |         |                     |                     |
	| ssh     | mount-start-2-821000 ssh -- ls                    | mount-start-2-821000 | minikube6\jenkins | v1.33.1 | 04 Jun 24 23:08 UTC | 04 Jun 24 23:08 UTC |
	|         | /minikube-host                                    |                      |                   |         |                     |                     |
	| stop    | -p mount-start-2-821000                           | mount-start-2-821000 | minikube6\jenkins | v1.33.1 | 04 Jun 24 23:08 UTC | 04 Jun 24 23:08 UTC |
	| start   | -p mount-start-2-821000                           | mount-start-2-821000 | minikube6\jenkins | v1.33.1 | 04 Jun 24 23:08 UTC | 04 Jun 24 23:11 UTC |
	| mount   | C:\Users\jenkins.minikube6:/minikube-host         | mount-start-2-821000 | minikube6\jenkins | v1.33.1 | 04 Jun 24 23:11 UTC |                     |
	|         | --profile mount-start-2-821000 --v 0              |                      |                   |         |                     |                     |
	|         | --9p-version 9p2000.L --gid 0 --ip                |                      |                   |         |                     |                     |
	|         | --msize 6543 --port 46465 --type 9p --uid         |                      |                   |         |                     |                     |
	|         |                                                 0 |                      |                   |         |                     |                     |
	| ssh     | mount-start-2-821000 ssh -- ls                    | mount-start-2-821000 | minikube6\jenkins | v1.33.1 | 04 Jun 24 23:11 UTC | 04 Jun 24 23:11 UTC |
	|         | /minikube-host                                    |                      |                   |         |                     |                     |
	| delete  | -p mount-start-2-821000                           | mount-start-2-821000 | minikube6\jenkins | v1.33.1 | 04 Jun 24 23:11 UTC | 04 Jun 24 23:11 UTC |
	| delete  | -p mount-start-1-821000                           | mount-start-1-821000 | minikube6\jenkins | v1.33.1 | 04 Jun 24 23:11 UTC | 04 Jun 24 23:11 UTC |
	| start   | -p multinode-022000                               | multinode-022000     | minikube6\jenkins | v1.33.1 | 04 Jun 24 23:11 UTC | 04 Jun 24 23:18 UTC |
	|         | --wait=true --memory=2200                         |                      |                   |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |                   |         |                     |                     |
	|         | --alsologtostderr                                 |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                   |                      |                   |         |                     |                     |
	| kubectl | -p multinode-022000 -- apply -f                   | multinode-022000     | minikube6\jenkins | v1.33.1 | 04 Jun 24 23:19 UTC | 04 Jun 24 23:19 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |                   |         |                     |                     |
	| kubectl | -p multinode-022000 -- rollout                    | multinode-022000     | minikube6\jenkins | v1.33.1 | 04 Jun 24 23:19 UTC | 04 Jun 24 23:19 UTC |
	|         | status deployment/busybox                         |                      |                   |         |                     |                     |
	| kubectl | -p multinode-022000 -- get pods -o                | multinode-022000     | minikube6\jenkins | v1.33.1 | 04 Jun 24 23:19 UTC | 04 Jun 24 23:19 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |         |                     |                     |
	| kubectl | -p multinode-022000 -- get pods -o                | multinode-022000     | minikube6\jenkins | v1.33.1 | 04 Jun 24 23:19 UTC | 04 Jun 24 23:19 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-022000 -- exec                       | multinode-022000     | minikube6\jenkins | v1.33.1 | 04 Jun 24 23:19 UTC | 04 Jun 24 23:19 UTC |
	|         | busybox-fc5497c4f-8bcjx --                        |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |                   |         |                     |                     |
	| kubectl | -p multinode-022000 -- exec                       | multinode-022000     | minikube6\jenkins | v1.33.1 | 04 Jun 24 23:19 UTC | 04 Jun 24 23:19 UTC |
	|         | busybox-fc5497c4f-cbgjv --                        |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |                   |         |                     |                     |
	| kubectl | -p multinode-022000 -- exec                       | multinode-022000     | minikube6\jenkins | v1.33.1 | 04 Jun 24 23:19 UTC | 04 Jun 24 23:19 UTC |
	|         | busybox-fc5497c4f-8bcjx --                        |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |                   |         |                     |                     |
	| kubectl | -p multinode-022000 -- exec                       | multinode-022000     | minikube6\jenkins | v1.33.1 | 04 Jun 24 23:19 UTC | 04 Jun 24 23:19 UTC |
	|         | busybox-fc5497c4f-cbgjv --                        |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |                   |         |                     |                     |
	| kubectl | -p multinode-022000 -- exec                       | multinode-022000     | minikube6\jenkins | v1.33.1 | 04 Jun 24 23:19 UTC | 04 Jun 24 23:19 UTC |
	|         | busybox-fc5497c4f-8bcjx -- nslookup               |                      |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-022000 -- exec                       | multinode-022000     | minikube6\jenkins | v1.33.1 | 04 Jun 24 23:19 UTC | 04 Jun 24 23:19 UTC |
	|         | busybox-fc5497c4f-cbgjv -- nslookup               |                      |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-022000 -- get pods -o                | multinode-022000     | minikube6\jenkins | v1.33.1 | 04 Jun 24 23:19 UTC | 04 Jun 24 23:19 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-022000 -- exec                       | multinode-022000     | minikube6\jenkins | v1.33.1 | 04 Jun 24 23:19 UTC | 04 Jun 24 23:19 UTC |
	|         | busybox-fc5497c4f-8bcjx                           |                      |                   |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |                   |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |                   |         |                     |                     |
	| kubectl | -p multinode-022000 -- exec                       | multinode-022000     | minikube6\jenkins | v1.33.1 | 04 Jun 24 23:19 UTC |                     |
	|         | busybox-fc5497c4f-8bcjx -- sh                     |                      |                   |         |                     |                     |
	|         | -c ping -c 1 172.20.128.1                         |                      |                   |         |                     |                     |
	| kubectl | -p multinode-022000 -- exec                       | multinode-022000     | minikube6\jenkins | v1.33.1 | 04 Jun 24 23:19 UTC | 04 Jun 24 23:19 UTC |
	|         | busybox-fc5497c4f-cbgjv                           |                      |                   |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |                   |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |                   |         |                     |                     |
	| kubectl | -p multinode-022000 -- exec                       | multinode-022000     | minikube6\jenkins | v1.33.1 | 04 Jun 24 23:19 UTC |                     |
	|         | busybox-fc5497c4f-cbgjv -- sh                     |                      |                   |         |                     |                     |
	|         | -c ping -c 1 172.20.128.1                         |                      |                   |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/04 23:11:50
	Running on machine: minikube6
	Binary: Built with gc go1.22.3 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0604 23:11:50.938566    6196 out.go:291] Setting OutFile to fd 1188 ...
	I0604 23:11:50.940378    6196 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0604 23:11:50.940378    6196 out.go:304] Setting ErrFile to fd 884...
	I0604 23:11:50.940378    6196 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0604 23:11:50.971352    6196 out.go:298] Setting JSON to false
	I0604 23:11:50.971903    6196 start.go:129] hostinfo: {"hostname":"minikube6","uptime":89960,"bootTime":1717452750,"procs":190,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4474 Build 19045.4474","kernelVersion":"10.0.19045.4474 Build 19045.4474","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0604 23:11:50.971903    6196 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0604 23:11:50.982016    6196 out.go:177] * [multinode-022000] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4474 Build 19045.4474
	I0604 23:11:50.986541    6196 notify.go:220] Checking for updates...
	I0604 23:11:50.988842    6196 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0604 23:11:50.991641    6196 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0604 23:11:50.995727    6196 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0604 23:11:50.998398    6196 out.go:177]   - MINIKUBE_LOCATION=19024
	I0604 23:11:51.001798    6196 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0604 23:11:51.007108    6196 config.go:182] Loaded profile config "ha-609500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0604 23:11:51.007668    6196 driver.go:392] Setting default libvirt URI to qemu:///system
	I0604 23:11:56.752972    6196 out.go:177] * Using the hyperv driver based on user configuration
	I0604 23:11:56.756869    6196 start.go:297] selected driver: hyperv
	I0604 23:11:56.756869    6196 start.go:901] validating driver "hyperv" against <nil>
	I0604 23:11:56.759166    6196 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0604 23:11:56.813162    6196 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0604 23:11:56.814562    6196 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0604 23:11:56.814624    6196 cni.go:84] Creating CNI manager for ""
	I0604 23:11:56.814624    6196 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0604 23:11:56.814624    6196 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0604 23:11:56.814624    6196 start.go:340] cluster config:
	{Name:multinode-022000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717518322-19024@sha256:d2210ba725128d67c6173c8b8d82d6c8736e8dad7a6c389a278f795205c6764f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-022000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: Stat
icIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0604 23:11:56.814624    6196 iso.go:125] acquiring lock: {Name:mkd51e140550ee3ad29317eefa47594b071594dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0604 23:11:56.819653    6196 out.go:177] * Starting "multinode-022000" primary control-plane node in "multinode-022000" cluster
	I0604 23:11:56.821799    6196 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0604 23:11:56.821799    6196 preload.go:147] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
	I0604 23:11:56.821799    6196 cache.go:56] Caching tarball of preloaded images
	I0604 23:11:56.821799    6196 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0604 23:11:56.821799    6196 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0604 23:11:56.823424    6196 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-022000\config.json ...
	I0604 23:11:56.823619    6196 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-022000\config.json: {Name:mk5c99c0f75f9c570ef890f215c48836e63daea1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0604 23:11:56.823923    6196 start.go:360] acquireMachinesLock for multinode-022000: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0604 23:11:56.825103    6196 start.go:364] duration metric: took 113.3µs to acquireMachinesLock for "multinode-022000"
	I0604 23:11:56.825311    6196 start.go:93] Provisioning new machine with config: &{Name:multinode-022000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19024/minikube-v1.33.1-1717518792-19024-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717518322-19024@sha256:d2210ba725128d67c6173c8b8d82d6c8736e8dad7a6c389a278f795205c6764f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.30.1 ClusterName:multinode-022000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mount
Options:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0604 23:11:56.825311    6196 start.go:125] createHost starting for "" (driver="hyperv")
	I0604 23:11:56.828092    6196 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0604 23:11:56.829789    6196 start.go:159] libmachine.API.Create for "multinode-022000" (driver="hyperv")
	I0604 23:11:56.829789    6196 client.go:168] LocalClient.Create starting
	I0604 23:11:56.830129    6196 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0604 23:11:56.830129    6196 main.go:141] libmachine: Decoding PEM data...
	I0604 23:11:56.830129    6196 main.go:141] libmachine: Parsing certificate...
	I0604 23:11:56.830905    6196 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0604 23:11:56.831179    6196 main.go:141] libmachine: Decoding PEM data...
	I0604 23:11:56.831179    6196 main.go:141] libmachine: Parsing certificate...
	I0604 23:11:56.831179    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0604 23:11:58.995896    6196 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0604 23:11:58.995896    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:11:59.005843    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0604 23:12:00.929391    6196 main.go:141] libmachine: [stdout =====>] : False
	
	I0604 23:12:00.938397    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:12:00.938397    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0604 23:12:02.556708    6196 main.go:141] libmachine: [stdout =====>] : True
	
	I0604 23:12:02.556708    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:12:02.565436    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0604 23:12:06.485499    6196 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0604 23:12:06.485499    6196 main.go:141] libmachine: [stderr =====>] : 
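Throughout this phase the driver does its Hyper-V work by shelling out to powershell.exe and, for queries like the switch enumeration above, parsing the JSON that ConvertTo-Json emits. A minimal Go sketch of that pattern (vmSwitch and listSwitches are illustrative names, not minikube's source):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// vmSwitch mirrors the fields selected by Get-VMSwitch | Select Id, Name, SwitchType.
type vmSwitch struct {
	Id         string
	Name       string
	SwitchType int // Hyper-V enum: 0=Private, 1=Internal, 2=External
}

// listSwitches runs the same PowerShell pipeline as the log and decodes its JSON output.
func listSwitches() ([]vmSwitch, error) {
	script := `[Console]::OutputEncoding = [Text.Encoding]::UTF8; ` +
		`ConvertTo-Json @(Hyper-V\Get-VMSwitch | Select Id, Name, SwitchType)`
	out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", script).Output()
	if err != nil {
		return nil, err
	}
	var switches []vmSwitch
	if err := json.Unmarshal(out, &switches); err != nil {
		return nil, err
	}
	return switches, nil
}

func main() {
	switches, err := listSwitches()
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	for _, s := range switches {
		fmt.Printf("%s (%s) type=%d\n", s.Name, s.Id, s.SwitchType)
	}
}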
	I0604 23:12:06.488564    6196 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1717518792-19024-amd64.iso...
	I0604 23:12:07.034231    6196 main.go:141] libmachine: Creating SSH key...
	I0604 23:12:07.365189    6196 main.go:141] libmachine: Creating VM...
	I0604 23:12:07.365189    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0604 23:12:10.523521    6196 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0604 23:12:10.523521    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:12:10.523521    6196 main.go:141] libmachine: Using switch "Default Switch"
	I0604 23:12:10.523893    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0604 23:12:12.375736    6196 main.go:141] libmachine: [stdout =====>] : True
	
	I0604 23:12:12.384112    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:12:12.384112    6196 main.go:141] libmachine: Creating VHD
	I0604 23:12:12.384112    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-022000\fixed.vhd' -SizeBytes 10MB -Fixed
	I0604 23:12:16.381182    6196 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-022000\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : BCA69AF8-6241-4CFF-9F74-B5CC0E3602EB
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0604 23:12:16.381182    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:12:16.381182    6196 main.go:141] libmachine: Writing magic tar header
	I0604 23:12:16.381182    6196 main.go:141] libmachine: Writing SSH key tar header
	I0604 23:12:16.390861    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-022000\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-022000\disk.vhd' -VHDType Dynamic -DeleteSource
	I0604 23:12:19.685450    6196 main.go:141] libmachine: [stdout =====>] : 
	I0604 23:12:19.697481    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:12:19.697732    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-022000\disk.vhd' -SizeBytes 20000MB
	I0604 23:12:22.393203    6196 main.go:141] libmachine: [stdout =====>] : 
	I0604 23:12:22.393203    6196 main.go:141] libmachine: [stderr =====>] : 
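The disk is prepared in three steps visible above: a small fixed-size 10MB VHD is created so its raw bytes can be seeded directly (the "Writing magic tar header" / "Writing SSH key tar header" lines), the file is then converted to a dynamic VHD, and finally resized to the requested 20000MB. A sketch of the same sequence, assuming a ps helper that invokes powershell.exe (helper names and the example path are illustrative):

package main

import (
	"fmt"
	"os/exec"
)

// ps runs a single PowerShell command, echoing whatever it prints.
func ps(command string) error {
	out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", command).CombinedOutput()
	fmt.Print(string(out))
	return err
}

// buildDisk reproduces the New-VHD -> Convert-VHD -> Resize-VHD sequence from the log.
func buildDisk(machineDir string, sizeMB int) error {
	fixed := machineDir + `\fixed.vhd`
	disk := machineDir + `\disk.vhd`
	steps := []string{
		// 1. Small fixed VHD whose raw bytes can be written to directly (SSH key in a tar header).
		fmt.Sprintf(`Hyper-V\New-VHD -Path '%s' -SizeBytes 10MB -Fixed`, fixed),
		// 2. Convert it to a growable (dynamic) disk once the seed data is in place.
		fmt.Sprintf(`Hyper-V\Convert-VHD -Path '%s' -DestinationPath '%s' -VHDType Dynamic -DeleteSource`, fixed, disk),
		// 3. Grow the virtual size to the requested disk size.
		fmt.Sprintf(`Hyper-V\Resize-VHD -Path '%s' -SizeBytes %dMB`, disk, sizeMB),
	}
	for _, step := range steps {
		if err := ps(step); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	dir := `C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-022000`
	if err := buildDisk(dir, 20000); err != nil {
		fmt.Println("error:", err)
	}
}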
	I0604 23:12:22.407069    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-022000 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-022000' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0604 23:12:26.455688    6196 main.go:141] libmachine: [stdout =====>] : 
	Name             State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----             ----- ----------- ----------------- ------   ------             -------
	multinode-022000 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0604 23:12:26.455783    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:12:26.455783    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-022000 -DynamicMemoryEnabled $false
	I0604 23:12:28.939102    6196 main.go:141] libmachine: [stdout =====>] : 
	I0604 23:12:28.939102    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:12:28.939102    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-022000 -Count 2
	I0604 23:12:31.327868    6196 main.go:141] libmachine: [stdout =====>] : 
	I0604 23:12:31.341976    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:12:31.342233    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-022000 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-022000\boot2docker.iso'
	I0604 23:12:34.140105    6196 main.go:141] libmachine: [stdout =====>] : 
	I0604 23:12:34.140105    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:12:34.143563    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-022000 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-022000\disk.vhd'
	I0604 23:12:36.988077    6196 main.go:141] libmachine: [stdout =====>] : 
	I0604 23:12:36.988077    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:12:36.988077    6196 main.go:141] libmachine: Starting VM...
	I0604 23:12:36.988077    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-022000
	I0604 23:12:40.284458    6196 main.go:141] libmachine: [stdout =====>] : 
	I0604 23:12:40.297074    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:12:40.297074    6196 main.go:141] libmachine: Waiting for host to start...
	I0604 23:12:40.297074    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000 ).state
	I0604 23:12:42.757456    6196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:12:42.757456    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:12:42.757456    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000 ).networkadapters[0]).ipaddresses[0]
	I0604 23:12:45.495421    6196 main.go:141] libmachine: [stdout =====>] : 
	I0604 23:12:45.495421    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:12:46.508549    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000 ).state
	I0604 23:12:48.901871    6196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:12:48.902033    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:12:48.902033    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000 ).networkadapters[0]).ipaddresses[0]
	I0604 23:12:51.611701    6196 main.go:141] libmachine: [stdout =====>] : 
	I0604 23:12:51.611781    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:12:52.620226    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000 ).state
	I0604 23:12:55.036633    6196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:12:55.036633    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:12:55.036810    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000 ).networkadapters[0]).ipaddresses[0]
	I0604 23:12:57.746024    6196 main.go:141] libmachine: [stdout =====>] : 
	I0604 23:12:57.752732    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:12:58.755831    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000 ).state
	I0604 23:13:01.192377    6196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:13:01.199150    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:13:01.199274    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000 ).networkadapters[0]).ipaddresses[0]
	I0604 23:13:03.976813    6196 main.go:141] libmachine: [stdout =====>] : 
	I0604 23:13:03.976813    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:13:04.993010    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000 ).state
	I0604 23:13:07.426655    6196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:13:07.426655    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:13:07.426863    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000 ).networkadapters[0]).ipaddresses[0]
	I0604 23:13:10.198860    6196 main.go:141] libmachine: [stdout =====>] : 172.20.128.97
	
	I0604 23:13:10.211321    6196 main.go:141] libmachine: [stderr =====>] : 
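From "Waiting for host to start..." onward, the driver alternates between querying the VM state and asking for the first IP address of its first network adapter, retrying until DHCP has handed out a lease (the empty stdout lines above are the not-yet-ready answers). A hedged sketch of such a polling loop; the interval and timeout values are assumptions:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// firstIP asks Hyper-V for the first address of the VM's first network adapter.
func firstIP(vm string) (string, error) {
	cmd := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive",
		fmt.Sprintf(`(( Hyper-V\Get-VM %s ).networkadapters[0]).ipaddresses[0]`, vm))
	out, err := cmd.Output()
	return strings.TrimSpace(string(out)), err
}

// waitForIP polls until a non-empty IPv4 address is reported or the timeout expires.
func waitForIP(vm string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		ip, err := firstIP(vm)
		if err == nil && ip != "" && !strings.Contains(ip, ":") { // skip empty / IPv6 answers
			return ip, nil
		}
		time.Sleep(time.Second) // roughly the cadence seen in the log
	}
	return "", fmt.Errorf("timed out waiting for %s to get an IP", vm)
}

func main() {
	ip, err := waitForIP("multinode-022000", 5*time.Minute)
	fmt.Println(ip, err)
}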
	I0604 23:13:10.211321    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000 ).state
	I0604 23:13:12.506051    6196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:13:12.506112    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:13:12.506112    6196 machine.go:94] provisionDockerMachine start ...
	I0604 23:13:12.506112    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000 ).state
	I0604 23:13:14.888039    6196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:13:14.900262    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:13:14.900262    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000 ).networkadapters[0]).ipaddresses[0]
	I0604 23:13:17.688030    6196 main.go:141] libmachine: [stdout =====>] : 172.20.128.97
	
	I0604 23:13:17.700752    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:13:17.706504    6196 main.go:141] libmachine: Using SSH client type: native
	I0604 23:13:17.715039    6196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf6a540] 0xf6d120 <nil>  [] 0s} 172.20.128.97 22 <nil> <nil>}
	I0604 23:13:17.715039    6196 main.go:141] libmachine: About to run SSH command:
	hostname
	I0604 23:13:17.845663    6196 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0604 23:13:17.845786    6196 buildroot.go:166] provisioning hostname "multinode-022000"
	I0604 23:13:17.845786    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000 ).state
	I0604 23:13:20.170099    6196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:13:20.170099    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:13:20.183414    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000 ).networkadapters[0]).ipaddresses[0]
	I0604 23:13:22.958992    6196 main.go:141] libmachine: [stdout =====>] : 172.20.128.97
	
	I0604 23:13:22.958992    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:13:22.977984    6196 main.go:141] libmachine: Using SSH client type: native
	I0604 23:13:22.978691    6196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf6a540] 0xf6d120 <nil>  [] 0s} 172.20.128.97 22 <nil> <nil>}
	I0604 23:13:22.978691    6196 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-022000 && echo "multinode-022000" | sudo tee /etc/hostname
	I0604 23:13:23.143053    6196 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-022000
	
	I0604 23:13:23.143213    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000 ).state
	I0604 23:13:25.463698    6196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:13:25.463698    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:13:25.463794    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000 ).networkadapters[0]).ipaddresses[0]
	I0604 23:13:28.206323    6196 main.go:141] libmachine: [stdout =====>] : 172.20.128.97
	
	I0604 23:13:28.206323    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:13:28.226179    6196 main.go:141] libmachine: Using SSH client type: native
	I0604 23:13:28.226325    6196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf6a540] 0xf6d120 <nil>  [] 0s} 172.20.128.97 22 <nil> <nil>}
	I0604 23:13:28.226325    6196 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-022000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-022000/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-022000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0604 23:13:28.375824    6196 main.go:141] libmachine: SSH cmd err, output: <nil>: 
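Provisioning first sets the hostname over SSH and then patches /etc/hosts with the shell fragment shown above, so 127.0.1.1 resolves to the machine name. A sketch of generating that fragment for an arbitrary hostname (hostsPatch is an illustrative helper, not minikube's code):

package main

import "fmt"

// hostsPatch builds the idempotent /etc/hosts update script echoed in the log:
// replace an existing 127.0.1.1 entry if present, otherwise append one.
func hostsPatch(hostname string) string {
	return fmt.Sprintf(`
		if ! grep -xq '.*\s%[1]s' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
			else
				echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
			fi
		fi`, hostname)
}

func main() {
	fmt.Println(hostsPatch("multinode-022000"))
}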
	I0604 23:13:28.375824    6196 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0604 23:13:28.375824    6196 buildroot.go:174] setting up certificates
	I0604 23:13:28.375824    6196 provision.go:84] configureAuth start
	I0604 23:13:28.375824    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000 ).state
	I0604 23:13:30.675619    6196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:13:30.675619    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:13:30.688391    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000 ).networkadapters[0]).ipaddresses[0]
	I0604 23:13:33.451482    6196 main.go:141] libmachine: [stdout =====>] : 172.20.128.97
	
	I0604 23:13:33.451482    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:13:33.451482    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000 ).state
	I0604 23:13:35.756530    6196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:13:35.756530    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:13:35.756803    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000 ).networkadapters[0]).ipaddresses[0]
	I0604 23:13:38.552123    6196 main.go:141] libmachine: [stdout =====>] : 172.20.128.97
	
	I0604 23:13:38.552123    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:13:38.552123    6196 provision.go:143] copyHostCerts
	I0604 23:13:38.564937    6196 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0604 23:13:38.565210    6196 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0604 23:13:38.565296    6196 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0604 23:13:38.565741    6196 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0604 23:13:38.567371    6196 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0604 23:13:38.567670    6196 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0604 23:13:38.567670    6196 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0604 23:13:38.567670    6196 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0604 23:13:38.569073    6196 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0604 23:13:38.569251    6196 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0604 23:13:38.569251    6196 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0604 23:13:38.569791    6196 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0604 23:13:38.570720    6196 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-022000 san=[127.0.0.1 172.20.128.97 localhost minikube multinode-022000]
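The server certificate is generated for the SAN list shown above (127.0.0.1, the VM's address 172.20.128.97, localhost, minikube, and the machine name) and signed with the local CA. A sketch of the corresponding template using Go's crypto/x509; the key-usage extensions are assumptions, and signing against the CA key with x509.CreateCertificate is omitted for brevity:

package main

import (
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

// serverCertTemplate builds a server-auth certificate template whose SANs match
// the san=[...] list in the log above.
func serverCertTemplate(nodeIP string, names []string) *x509.Certificate {
	return &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-022000"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump (3 years)
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP(nodeIP)},
		DNSNames:     names,
	}
}

func main() {
	t := serverCertTemplate("172.20.128.97", []string{"localhost", "minikube", "multinode-022000"})
	fmt.Println(t.DNSNames, t.IPAddresses)
}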
	I0604 23:13:38.743476    6196 provision.go:177] copyRemoteCerts
	I0604 23:13:38.762175    6196 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0604 23:13:38.762175    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000 ).state
	I0604 23:13:41.045880    6196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:13:41.045880    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:13:41.058397    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000 ).networkadapters[0]).ipaddresses[0]
	I0604 23:13:43.813292    6196 main.go:141] libmachine: [stdout =====>] : 172.20.128.97
	
	I0604 23:13:43.813410    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:13:43.813903    6196 sshutil.go:53] new ssh client: &{IP:172.20.128.97 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-022000\id_rsa Username:docker}
	I0604 23:13:43.922365    6196 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.1601493s)
	I0604 23:13:43.922365    6196 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0604 23:13:43.922982    6196 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0604 23:13:43.973646    6196 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0604 23:13:43.973779    6196 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0604 23:13:44.027509    6196 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0604 23:13:44.028174    6196 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0604 23:13:44.080145    6196 provision.go:87] duration metric: took 15.7041342s to configureAuth
	I0604 23:13:44.080215    6196 buildroot.go:189] setting minikube options for container-runtime
	I0604 23:13:44.080837    6196 config.go:182] Loaded profile config "multinode-022000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0604 23:13:44.080837    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000 ).state
	I0604 23:13:46.363849    6196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:13:46.364079    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:13:46.364150    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000 ).networkadapters[0]).ipaddresses[0]
	I0604 23:13:49.068168    6196 main.go:141] libmachine: [stdout =====>] : 172.20.128.97
	
	I0604 23:13:49.079773    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:13:49.085589    6196 main.go:141] libmachine: Using SSH client type: native
	I0604 23:13:49.086156    6196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf6a540] 0xf6d120 <nil>  [] 0s} 172.20.128.97 22 <nil> <nil>}
	I0604 23:13:49.086230    6196 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0604 23:13:49.220934    6196 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0604 23:13:49.221023    6196 buildroot.go:70] root file system type: tmpfs
	I0604 23:13:49.221180    6196 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0604 23:13:49.221340    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000 ).state
	I0604 23:13:51.528476    6196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:13:51.529126    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:13:51.529254    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000 ).networkadapters[0]).ipaddresses[0]
	I0604 23:13:54.297161    6196 main.go:141] libmachine: [stdout =====>] : 172.20.128.97
	
	I0604 23:13:54.308433    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:13:54.315013    6196 main.go:141] libmachine: Using SSH client type: native
	I0604 23:13:54.315628    6196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf6a540] 0xf6d120 <nil>  [] 0s} 172.20.128.97 22 <nil> <nil>}
	I0604 23:13:54.315628    6196 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0604 23:13:54.478292    6196 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0604 23:13:54.478292    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000 ).state
	I0604 23:13:56.783950    6196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:13:56.789307    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:13:56.789307    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000 ).networkadapters[0]).ipaddresses[0]
	I0604 23:13:59.494952    6196 main.go:141] libmachine: [stdout =====>] : 172.20.128.97
	
	I0604 23:13:59.495012    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:13:59.500436    6196 main.go:141] libmachine: Using SSH client type: native
	I0604 23:13:59.500977    6196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf6a540] 0xf6d120 <nil>  [] 0s} 172.20.128.97 22 <nil> <nil>}
	I0604 23:13:59.501165    6196 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0604 23:14:01.650321    6196 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0604 23:14:01.650321    6196 machine.go:97] duration metric: took 49.1438276s to provisionDockerMachine
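The docker.service unit written a few lines above is installed idempotently: the freshly written docker.service.new is diffed against the installed unit and only moved into place (followed by daemon-reload, enable, and restart) when they differ; on this fresh VM the diff fails with "No such file or directory", so the install branch runs and produces the "Created symlink" line. A sketch of composing that one-liner (installUnitCmd is an illustrative helper):

package main

import (
	"fmt"
	"strings"
)

// installUnitCmd builds the "install only if changed" command seen in the log.
func installUnitCmd(unit string) string {
	path := "/lib/systemd/system/" + unit
	svc := strings.TrimSuffix(unit, ".service")
	return fmt.Sprintf(
		"sudo diff -u %[1]s %[1]s.new || "+
			"{ sudo mv %[1]s.new %[1]s; sudo systemctl -f daemon-reload && "+
			"sudo systemctl -f enable %[2]s && sudo systemctl -f restart %[2]s; }",
		path, svc)
}

func main() {
	fmt.Println(installUnitCmd("docker.service"))
}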
	I0604 23:14:01.650321    6196 client.go:171] duration metric: took 2m4.819558s to LocalClient.Create
	I0604 23:14:01.650321    6196 start.go:167] duration metric: took 2m4.819558s to libmachine.API.Create "multinode-022000"
	I0604 23:14:01.650321    6196 start.go:293] postStartSetup for "multinode-022000" (driver="hyperv")
	I0604 23:14:01.650321    6196 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0604 23:14:01.666286    6196 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0604 23:14:01.666286    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000 ).state
	I0604 23:14:03.937693    6196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:14:03.940635    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:14:03.940635    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000 ).networkadapters[0]).ipaddresses[0]
	I0604 23:14:06.685204    6196 main.go:141] libmachine: [stdout =====>] : 172.20.128.97
	
	I0604 23:14:06.685204    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:14:06.685501    6196 sshutil.go:53] new ssh client: &{IP:172.20.128.97 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-022000\id_rsa Username:docker}
	I0604 23:14:06.803466    6196 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.1371404s)
	I0604 23:14:06.817959    6196 ssh_runner.go:195] Run: cat /etc/os-release
	I0604 23:14:06.828555    6196 command_runner.go:130] > NAME=Buildroot
	I0604 23:14:06.828555    6196 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0604 23:14:06.828555    6196 command_runner.go:130] > ID=buildroot
	I0604 23:14:06.828555    6196 command_runner.go:130] > VERSION_ID=2023.02.9
	I0604 23:14:06.828555    6196 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0604 23:14:06.828555    6196 info.go:137] Remote host: Buildroot 2023.02.9
	I0604 23:14:06.828555    6196 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0604 23:14:06.829193    6196 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0604 23:14:06.830383    6196 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\140642.pem -> 140642.pem in /etc/ssl/certs
	I0604 23:14:06.830383    6196 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\140642.pem -> /etc/ssl/certs/140642.pem
	I0604 23:14:06.842483    6196 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0604 23:14:06.863834    6196 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\140642.pem --> /etc/ssl/certs/140642.pem (1708 bytes)
	I0604 23:14:06.909377    6196 start.go:296] duration metric: took 5.2590154s for postStartSetup
	I0604 23:14:06.911704    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000 ).state
	I0604 23:14:09.239572    6196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:14:09.239572    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:14:09.252777    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000 ).networkadapters[0]).ipaddresses[0]
	I0604 23:14:12.004165    6196 main.go:141] libmachine: [stdout =====>] : 172.20.128.97
	
	I0604 23:14:12.004165    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:14:12.004340    6196 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-022000\config.json ...
	I0604 23:14:12.007388    6196 start.go:128] duration metric: took 2m15.1810232s to createHost
	I0604 23:14:12.007388    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000 ).state
	I0604 23:14:14.340015    6196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:14:14.340015    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:14:14.340015    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000 ).networkadapters[0]).ipaddresses[0]
	I0604 23:14:17.085744    6196 main.go:141] libmachine: [stdout =====>] : 172.20.128.97
	
	I0604 23:14:17.085830    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:14:17.092956    6196 main.go:141] libmachine: Using SSH client type: native
	I0604 23:14:17.092956    6196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf6a540] 0xf6d120 <nil>  [] 0s} 172.20.128.97 22 <nil> <nil>}
	I0604 23:14:17.093535    6196 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0604 23:14:17.223803    6196 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717542857.223378729
	
	I0604 23:14:17.223951    6196 fix.go:216] guest clock: 1717542857.223378729
	I0604 23:14:17.223951    6196 fix.go:229] Guest: 2024-06-04 23:14:17.223378729 +0000 UTC Remote: 2024-06-04 23:14:12.0073882 +0000 UTC m=+141.244022401 (delta=5.215990529s)
	I0604 23:14:17.224064    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000 ).state
	I0604 23:14:19.483513    6196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:14:19.483513    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:14:19.498605    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000 ).networkadapters[0]).ipaddresses[0]
	I0604 23:14:22.211109    6196 main.go:141] libmachine: [stdout =====>] : 172.20.128.97
	
	I0604 23:14:22.216211    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:14:22.225606    6196 main.go:141] libmachine: Using SSH client type: native
	I0604 23:14:22.226847    6196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf6a540] 0xf6d120 <nil>  [] 0s} 172.20.128.97 22 <nil> <nil>}
	I0604 23:14:22.226847    6196 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1717542857
	I0604 23:14:22.366649    6196 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Jun  4 23:14:17 UTC 2024
	
	I0604 23:14:22.366722    6196 fix.go:236] clock set: Tue Jun  4 23:14:17 UTC 2024
	 (err=<nil>)
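After createHost, the guest clock is compared against the host (the "delta=5.215990529s" line above) and, when the drift is large enough, corrected over SSH with sudo date -s @<unix-seconds>; in this run the later of the two readings was re-applied, which avoids moving the guest clock backwards. A hedged sketch; the 2-second tolerance and the "never go backwards" rule are assumptions inferred from this single run:

package main

import (
	"fmt"
	"time"
)

// syncClock corrects guest clock drift over SSH. sshRun is a hypothetical
// "run this command on the guest" helper.
func syncClock(sshRun func(string) error, guest, host time.Time) error {
	drift := guest.Sub(host)
	if drift < 0 {
		drift = -drift
	}
	if drift < 2*time.Second { // assumed tolerance; the log shows a ~5.2s delta being corrected
		return nil
	}
	// Apply the later of the two readings so the guest clock never jumps backwards
	// (matches this run, where the guest's own later timestamp was re-applied).
	target := host
	if guest.After(host) {
		target = guest
	}
	return sshRun(fmt.Sprintf("sudo date -s @%d", target.Unix()))
}

func main() {
	err := syncClock(func(cmd string) error { fmt.Println("ssh:", cmd); return nil },
		time.Now().Add(5*time.Second), time.Now())
	fmt.Println(err)
}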
	I0604 23:14:22.366722    6196 start.go:83] releasing machines lock for "multinode-022000", held for 2m25.5404854s
	I0604 23:14:22.366722    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000 ).state
	I0604 23:14:24.680421    6196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:14:24.680495    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:14:24.680495    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000 ).networkadapters[0]).ipaddresses[0]
	I0604 23:14:27.427914    6196 main.go:141] libmachine: [stdout =====>] : 172.20.128.97
	
	I0604 23:14:27.427914    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:14:27.431897    6196 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0604 23:14:27.432434    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000 ).state
	I0604 23:14:27.441472    6196 ssh_runner.go:195] Run: cat /version.json
	I0604 23:14:27.441472    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000 ).state
	I0604 23:14:29.791546    6196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:14:29.791546    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:14:29.791840    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000 ).networkadapters[0]).ipaddresses[0]
	I0604 23:14:29.819483    6196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:14:29.819606    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:14:29.819683    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000 ).networkadapters[0]).ipaddresses[0]
	I0604 23:14:32.605660    6196 main.go:141] libmachine: [stdout =====>] : 172.20.128.97
	
	I0604 23:14:32.605660    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:14:32.619034    6196 sshutil.go:53] new ssh client: &{IP:172.20.128.97 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-022000\id_rsa Username:docker}
	I0604 23:14:32.655696    6196 main.go:141] libmachine: [stdout =====>] : 172.20.128.97
	
	I0604 23:14:32.655696    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:14:32.656464    6196 sshutil.go:53] new ssh client: &{IP:172.20.128.97 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-022000\id_rsa Username:docker}
	I0604 23:14:32.817403    6196 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0604 23:14:32.817403    6196 command_runner.go:130] > {"iso_version": "v1.33.1-1717518792-19024", "kicbase_version": "v0.0.44-1717064182-18993", "minikube_version": "v1.33.1", "commit": "8ad41152cc14078867a3ba7f5e3c263f5bd90a46"}
	I0604 23:14:32.817403    6196 ssh_runner.go:235] Completed: cat /version.json: (5.3758894s)
	I0604 23:14:32.817403    6196 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.3854644s)
	I0604 23:14:32.832048    6196 ssh_runner.go:195] Run: systemctl --version
	I0604 23:14:32.842197    6196 command_runner.go:130] > systemd 252 (252)
	I0604 23:14:32.842498    6196 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0604 23:14:32.856792    6196 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0604 23:14:32.865996    6196 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0604 23:14:32.866374    6196 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0604 23:14:32.879524    6196 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0604 23:14:32.902950    6196 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0604 23:14:32.902950    6196 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0604 23:14:32.902950    6196 start.go:494] detecting cgroup driver to use...
	I0604 23:14:32.902950    6196 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0604 23:14:32.946397    6196 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0604 23:14:32.961318    6196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0604 23:14:33.000133    6196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0604 23:14:33.020453    6196 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0604 23:14:33.032247    6196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0604 23:14:33.066306    6196 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0604 23:14:33.107770    6196 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0604 23:14:33.142667    6196 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0604 23:14:33.180576    6196 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0604 23:14:33.216304    6196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0604 23:14:33.250091    6196 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0604 23:14:33.290076    6196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0604 23:14:33.324248    6196 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0604 23:14:33.345059    6196 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0604 23:14:33.356958    6196 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0604 23:14:33.394050    6196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0604 23:14:33.604494    6196 ssh_runner.go:195] Run: sudo systemctl restart containerd
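Before settling on the Docker runtime, the containerd config on the guest is normalized with the in-place sed edits above (pause image, SystemdCgroup=false for the cgroupfs driver, runc v2, CNI conf_dir) and containerd is restarted. The same sequence expressed as data driven through a hypothetical runCmd helper, with the commands copied from the log:

package main

import "fmt"

// configureContainerd replays the /etc/containerd/config.toml rewrites from the log.
func configureContainerd(runCmd func(string) error) error {
	cmds := []string{
		`sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml`,
		`sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml`, // cgroupfs driver
		`sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml`,
		`sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml`,
		`sudo systemctl daemon-reload`,
		`sudo systemctl restart containerd`,
	}
	for _, c := range cmds {
		if err := runCmd(c); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	_ = configureContainerd(func(c string) error { fmt.Println(c); return nil })
}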
	I0604 23:14:33.645054    6196 start.go:494] detecting cgroup driver to use...
	I0604 23:14:33.660200    6196 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0604 23:14:33.689981    6196 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0604 23:14:33.689981    6196 command_runner.go:130] > [Unit]
	I0604 23:14:33.689981    6196 command_runner.go:130] > Description=Docker Application Container Engine
	I0604 23:14:33.690092    6196 command_runner.go:130] > Documentation=https://docs.docker.com
	I0604 23:14:33.690092    6196 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0604 23:14:33.690092    6196 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0604 23:14:33.690129    6196 command_runner.go:130] > StartLimitBurst=3
	I0604 23:14:33.690129    6196 command_runner.go:130] > StartLimitIntervalSec=60
	I0604 23:14:33.690129    6196 command_runner.go:130] > [Service]
	I0604 23:14:33.690129    6196 command_runner.go:130] > Type=notify
	I0604 23:14:33.690129    6196 command_runner.go:130] > Restart=on-failure
	I0604 23:14:33.690129    6196 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0604 23:14:33.690129    6196 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0604 23:14:33.690129    6196 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0604 23:14:33.690129    6196 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0604 23:14:33.690129    6196 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0604 23:14:33.690129    6196 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0604 23:14:33.690251    6196 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0604 23:14:33.690251    6196 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0604 23:14:33.690251    6196 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0604 23:14:33.690251    6196 command_runner.go:130] > ExecStart=
	I0604 23:14:33.690304    6196 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0604 23:14:33.690349    6196 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0604 23:14:33.690349    6196 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0604 23:14:33.690388    6196 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0604 23:14:33.690388    6196 command_runner.go:130] > LimitNOFILE=infinity
	I0604 23:14:33.690421    6196 command_runner.go:130] > LimitNPROC=infinity
	I0604 23:14:33.690421    6196 command_runner.go:130] > LimitCORE=infinity
	I0604 23:14:33.690421    6196 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0604 23:14:33.690421    6196 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0604 23:14:33.690421    6196 command_runner.go:130] > TasksMax=infinity
	I0604 23:14:33.690421    6196 command_runner.go:130] > TimeoutStartSec=0
	I0604 23:14:33.690421    6196 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0604 23:14:33.690421    6196 command_runner.go:130] > Delegate=yes
	I0604 23:14:33.690421    6196 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0604 23:14:33.690421    6196 command_runner.go:130] > KillMode=process
	I0604 23:14:33.690421    6196 command_runner.go:130] > [Install]
	I0604 23:14:33.690421    6196 command_runner.go:130] > WantedBy=multi-user.target
	I0604 23:14:33.705097    6196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0604 23:14:33.740650    6196 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0604 23:14:33.796976    6196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0604 23:14:33.839772    6196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0604 23:14:33.875668    6196 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0604 23:14:33.949072    6196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0604 23:14:33.982025    6196 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0604 23:14:34.021163    6196 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0604 23:14:34.037053    6196 ssh_runner.go:195] Run: which cri-dockerd
	I0604 23:14:34.043322    6196 command_runner.go:130] > /usr/bin/cri-dockerd
	I0604 23:14:34.060157    6196 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0604 23:14:34.087256    6196 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0604 23:14:34.133391    6196 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0604 23:14:34.352524    6196 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0604 23:14:34.576338    6196 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0604 23:14:34.576716    6196 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0604 23:14:34.628206    6196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0604 23:14:34.830628    6196 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0604 23:14:37.378641    6196 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.547994s)
	I0604 23:14:37.394686    6196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0604 23:14:37.436697    6196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0604 23:14:37.478760    6196 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0604 23:14:37.690904    6196 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0604 23:14:37.903801    6196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0604 23:14:38.122302    6196 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0604 23:14:38.174581    6196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0604 23:14:38.215469    6196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0604 23:14:38.441232    6196 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0604 23:14:38.551485    6196 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0604 23:14:38.565744    6196 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0604 23:14:38.576662    6196 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0604 23:14:38.576662    6196 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0604 23:14:38.576662    6196 command_runner.go:130] > Device: 0,22	Inode: 882         Links: 1
	I0604 23:14:38.576662    6196 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0604 23:14:38.576662    6196 command_runner.go:130] > Access: 2024-06-04 23:14:38.466219427 +0000
	I0604 23:14:38.576662    6196 command_runner.go:130] > Modify: 2024-06-04 23:14:38.466219427 +0000
	I0604 23:14:38.576662    6196 command_runner.go:130] > Change: 2024-06-04 23:14:38.470219463 +0000
	I0604 23:14:38.576662    6196 command_runner.go:130] >  Birth: -
	I0604 23:14:38.576662    6196 start.go:562] Will wait 60s for crictl version
	I0604 23:14:38.589950    6196 ssh_runner.go:195] Run: which crictl
	I0604 23:14:38.592518    6196 command_runner.go:130] > /usr/bin/crictl
	I0604 23:14:38.608349    6196 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0604 23:14:38.667046    6196 command_runner.go:130] > Version:  0.1.0
	I0604 23:14:38.668624    6196 command_runner.go:130] > RuntimeName:  docker
	I0604 23:14:38.668624    6196 command_runner.go:130] > RuntimeVersion:  26.1.3
	I0604 23:14:38.668624    6196 command_runner.go:130] > RuntimeApiVersion:  v1
	I0604 23:14:38.668624    6196 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.1.3
	RuntimeApiVersion:  v1
	I0604 23:14:38.678541    6196 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0604 23:14:38.710750    6196 command_runner.go:130] > 26.1.3
	I0604 23:14:38.720955    6196 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0604 23:14:38.754644    6196 command_runner.go:130] > 26.1.3
	I0604 23:14:38.759490    6196 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.1.3 ...
	I0604 23:14:38.759490    6196 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0604 23:14:38.764343    6196 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0604 23:14:38.764343    6196 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0604 23:14:38.764343    6196 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0604 23:14:38.764343    6196 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:24:f8:85 Flags:up|broadcast|multicast|running}
	I0604 23:14:38.766488    6196 ip.go:210] interface addr: fe80::4093:d10:ab69:6c7d/64
	I0604 23:14:38.766488    6196 ip.go:210] interface addr: 172.20.128.1/20
	I0604 23:14:38.776971    6196 ssh_runner.go:195] Run: grep 172.20.128.1	host.minikube.internal$ /etc/hosts
	I0604 23:14:38.779153    6196 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.20.128.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
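The one-liner above is minikube's idempotent way of pinning host.minikube.internal in the guest's /etc/hosts: strip any existing entry, append the current host IP, write the result to a temp file, and copy it back into place under sudo. The same pattern, restated with the pieces named and commented:

    NAME=host.minikube.internal
    IP=172.20.128.1
    { grep -v $'\t'"$NAME"'$' /etc/hosts;     # drop any stale entry for NAME
      printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts              # replace the file in one step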
	I0604 23:14:38.806971    6196 kubeadm.go:877] updating cluster {Name:multinode-022000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19024/minikube-v1.33.1-1717518792-19024-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717518322-19024@sha256:d2210ba725128d67c6173c8b8d82d6c8736e8dad7a6c389a278f795205c6764f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.30.1 ClusterName:multinode-022000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.128.97 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOption
s:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0604 23:14:38.807164    6196 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0604 23:14:38.818544    6196 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0604 23:14:38.842682    6196 docker.go:685] Got preloaded images: 
	I0604 23:14:38.842682    6196 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.1 wasn't preloaded
	I0604 23:14:38.859382    6196 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0604 23:14:38.880066    6196 command_runner.go:139] > {"Repositories":{}}
	I0604 23:14:38.892162    6196 ssh_runner.go:195] Run: which lz4
	I0604 23:14:38.898986    6196 command_runner.go:130] > /usr/bin/lz4
	I0604 23:14:38.898986    6196 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0604 23:14:38.914435    6196 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0604 23:14:38.922899    6196 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0604 23:14:38.922899    6196 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0604 23:14:38.922899    6196 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359564351 bytes)
	I0604 23:14:41.268355    6196 docker.go:649] duration metric: took 2.3693502s to copy over tarball
	I0604 23:14:41.282742    6196 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0604 23:14:49.856117    6196 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.5733096s)
	I0604 23:14:49.856192    6196 ssh_runner.go:146] rm: /preloaded.tar.lz4
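The preload shortcut is: scp the lz4-compressed image tarball into the guest, unpack it straight into /var so Docker's overlay2 store is restored in place, then delete the tarball. The extraction flags, restated with comments (same command as in the log):

    # --xattrs/--xattrs-include : preserve extended attributes such as file capabilities
    # -I lz4                    : decompress the archive through lz4
    # -C /var                   : unpack relative to /var, restoring /var/lib/docker in place
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4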
	I0604 23:14:49.918324    6196 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0604 23:14:49.941818    6196 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.11.1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.12-0":"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b":"sha256:3861cfcd7c04ccac1f062788eca
39487248527ef0c0cfd477a83d7691a75a899"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.30.1":"sha256:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a","registry.k8s.io/kube-apiserver@sha256:0d4a3051234387b78affbcde283dcde5df21e0d6d740c80c363db1cbb973b4ea":"sha256:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.30.1":"sha256:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c","registry.k8s.io/kube-controller-manager@sha256:0c34190fbf807746f6584104811ed5cda72fb30ce30a036c132dea692d55ec52":"sha256:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.30.1":"sha256:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd","registry.k8s.io/kube-proxy@sha256:a1754e5a33878878e78dd0141167e7c529d91eb9b36ffbbf91a6052257b3179c":"sha256:747097150317f99937cabea484cff90097a2dbd79e7eb348b
71dc0af879883cd"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.30.1":"sha256:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035","registry.k8s.io/kube-scheduler@sha256:74d02f6debc5ff3d3bc03f96ae029fb9c72ec1ea94c14e2cdf279939d8e0e036":"sha256:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
	I0604 23:14:49.942046    6196 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0604 23:14:49.987611    6196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0604 23:14:50.219421    6196 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0604 23:14:53.051938    6196 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.8324949s)
	I0604 23:14:53.065110    6196 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0604 23:14:53.100033    6196 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.1
	I0604 23:14:53.100033    6196 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.1
	I0604 23:14:53.100033    6196 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.1
	I0604 23:14:53.100033    6196 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.1
	I0604 23:14:53.100033    6196 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0604 23:14:53.100033    6196 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0604 23:14:53.100033    6196 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0604 23:14:53.100033    6196 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0604 23:14:53.100033    6196 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0604 23:14:53.100033    6196 cache_images.go:84] Images are preloaded, skipping loading
	I0604 23:14:53.100033    6196 kubeadm.go:928] updating node { 172.20.128.97 8443 v1.30.1 docker true true} ...
	I0604 23:14:53.100033    6196 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-022000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.20.128.97
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:multinode-022000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
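The empty ExecStart= followed by a second ExecStart= is the standard systemd drop-in idiom for replacing, rather than appending to, the command inherited from the base kubelet unit; for a service that is not Type=oneshot, more than one accumulated ExecStart value is an error, so the blank assignment clears the list first. Once the drop-in (written as 10-kubeadm.conf a few lines below) is in place, the merged result can be inspected with this verification sketch:

    # show the base unit plus drop-ins as systemd merges them
    sudo systemctl cat kubelet.service
    # confirm which ExecStart actually won
    sudo systemctl show kubelet.service --property=ExecStart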
	I0604 23:14:53.111873    6196 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0604 23:14:53.151882    6196 command_runner.go:130] > cgroupfs
	I0604 23:14:53.152255    6196 cni.go:84] Creating CNI manager for ""
	I0604 23:14:53.152255    6196 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0604 23:14:53.152255    6196 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0604 23:14:53.152255    6196 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.20.128.97 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-022000 NodeName:multinode-022000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.20.128.97"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.20.128.97 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/
etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0604 23:14:53.153081    6196 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.20.128.97
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-022000"
	  kubeletExtraArgs:
	    node-ip: 172.20.128.97
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.20.128.97"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0604 23:14:53.168653    6196 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0604 23:14:53.187974    6196 command_runner.go:130] > kubeadm
	I0604 23:14:53.187974    6196 command_runner.go:130] > kubectl
	I0604 23:14:53.187974    6196 command_runner.go:130] > kubelet
	I0604 23:14:53.187974    6196 binaries.go:44] Found k8s binaries, skipping transfer
	I0604 23:14:53.203850    6196 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0604 23:14:53.229310    6196 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0604 23:14:53.266279    6196 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0604 23:14:53.299520    6196 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
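The rendered kubeadm config is staged here as kubeadm.yaml.new and copied to kubeadm.yaml just before init, later in the log. If one wanted to exercise that file without modifying the node, kubeadm's dry-run mode walks the same phases but writes its output to a temporary directory instead; an illustrative invocation (not something the test performs):

    sudo /var/lib/minikube/binaries/v1.30.1/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml.new --dry-run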
	I0604 23:14:53.346531    6196 ssh_runner.go:195] Run: grep 172.20.128.97	control-plane.minikube.internal$ /etc/hosts
	I0604 23:14:53.354385    6196 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.20.128.97	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0604 23:14:53.393999    6196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0604 23:14:53.601151    6196 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0604 23:14:53.636447    6196 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-022000 for IP: 172.20.128.97
	I0604 23:14:53.636447    6196 certs.go:194] generating shared ca certs ...
	I0604 23:14:53.636447    6196 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0604 23:14:53.637372    6196 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0604 23:14:53.637556    6196 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0604 23:14:53.637556    6196 certs.go:256] generating profile certs ...
	I0604 23:14:53.638304    6196 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-022000\client.key
	I0604 23:14:53.638304    6196 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-022000\client.crt with IP's: []
	I0604 23:14:54.346251    6196 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-022000\client.crt ...
	I0604 23:14:54.346251    6196 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-022000\client.crt: {Name:mk15651533d2efea0de6b736ab8260c3beb97c9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0604 23:14:54.351244    6196 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-022000\client.key ...
	I0604 23:14:54.351244    6196 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-022000\client.key: {Name:mkbf57425a5409edb8a1d018ad39981898254d53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0604 23:14:54.353262    6196 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-022000\apiserver.key.bbd58bba
	I0604 23:14:54.353262    6196 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-022000\apiserver.crt.bbd58bba with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.20.128.97]
	I0604 23:14:54.907956    6196 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-022000\apiserver.crt.bbd58bba ...
	I0604 23:14:54.907956    6196 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-022000\apiserver.crt.bbd58bba: {Name:mk3371512a998263025c8a2ad881a0c7ecef2f88 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0604 23:14:54.909152    6196 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-022000\apiserver.key.bbd58bba ...
	I0604 23:14:54.909152    6196 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-022000\apiserver.key.bbd58bba: {Name:mk18ef8c9344444a9f2801dc94bc33a4bf8c1ce2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0604 23:14:54.910545    6196 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-022000\apiserver.crt.bbd58bba -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-022000\apiserver.crt
	I0604 23:14:54.918367    6196 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-022000\apiserver.key.bbd58bba -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-022000\apiserver.key
	I0604 23:14:54.926626    6196 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-022000\proxy-client.key
	I0604 23:14:54.926626    6196 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-022000\proxy-client.crt with IP's: []
	I0604 23:14:55.150107    6196 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-022000\proxy-client.crt ...
	I0604 23:14:55.150107    6196 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-022000\proxy-client.crt: {Name:mkb46a357200a337890a4d66bfd25e7283ff83ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0604 23:14:55.160389    6196 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-022000\proxy-client.key ...
	I0604 23:14:55.160389    6196 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-022000\proxy-client.key: {Name:mk1a03424a8e09d3e0f3edd9d29dfdb81ce7a4d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0604 23:14:55.161449    6196 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0604 23:14:55.162511    6196 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0604 23:14:55.162793    6196 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0604 23:14:55.163053    6196 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0604 23:14:55.163053    6196 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-022000\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0604 23:14:55.163601    6196 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-022000\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0604 23:14:55.163735    6196 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-022000\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0604 23:14:55.171052    6196 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-022000\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0604 23:14:55.173838    6196 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14064.pem (1338 bytes)
	W0604 23:14:55.173896    6196 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14064_empty.pem, impossibly tiny 0 bytes
	I0604 23:14:55.173896    6196 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0604 23:14:55.173896    6196 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0604 23:14:55.174753    6196 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0604 23:14:55.175018    6196 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0604 23:14:55.175018    6196 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\140642.pem (1708 bytes)
	I0604 23:14:55.175753    6196 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14064.pem -> /usr/share/ca-certificates/14064.pem
	I0604 23:14:55.176087    6196 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\140642.pem -> /usr/share/ca-certificates/140642.pem
	I0604 23:14:55.176453    6196 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0604 23:14:55.177703    6196 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0604 23:14:55.238880    6196 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0604 23:14:55.292377    6196 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0604 23:14:55.337695    6196 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0604 23:14:55.401628    6196 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-022000\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0604 23:14:55.455861    6196 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-022000\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0604 23:14:55.514523    6196 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-022000\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0604 23:14:55.564820    6196 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-022000\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0604 23:14:55.613750    6196 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14064.pem --> /usr/share/ca-certificates/14064.pem (1338 bytes)
	I0604 23:14:55.664171    6196 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\140642.pem --> /usr/share/ca-certificates/140642.pem (1708 bytes)
	I0604 23:14:55.719186    6196 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0604 23:14:55.772767    6196 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0604 23:14:55.820061    6196 ssh_runner.go:195] Run: openssl version
	I0604 23:14:55.830151    6196 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0604 23:14:55.843407    6196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0604 23:14:55.879153    6196 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0604 23:14:55.887250    6196 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jun  4 21:33 /usr/share/ca-certificates/minikubeCA.pem
	I0604 23:14:55.887250    6196 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun  4 21:33 /usr/share/ca-certificates/minikubeCA.pem
	I0604 23:14:55.898424    6196 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0604 23:14:55.908410    6196 command_runner.go:130] > b5213941
	I0604 23:14:55.920873    6196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0604 23:14:55.956538    6196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14064.pem && ln -fs /usr/share/ca-certificates/14064.pem /etc/ssl/certs/14064.pem"
	I0604 23:14:55.988004    6196 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14064.pem
	I0604 23:14:55.992914    6196 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jun  4 21:50 /usr/share/ca-certificates/14064.pem
	I0604 23:14:55.992914    6196 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun  4 21:50 /usr/share/ca-certificates/14064.pem
	I0604 23:14:56.008987    6196 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14064.pem
	I0604 23:14:56.018082    6196 command_runner.go:130] > 51391683
	I0604 23:14:56.032589    6196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14064.pem /etc/ssl/certs/51391683.0"
	I0604 23:14:56.069759    6196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/140642.pem && ln -fs /usr/share/ca-certificates/140642.pem /etc/ssl/certs/140642.pem"
	I0604 23:14:56.102318    6196 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/140642.pem
	I0604 23:14:56.111789    6196 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jun  4 21:50 /usr/share/ca-certificates/140642.pem
	I0604 23:14:56.112070    6196 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun  4 21:50 /usr/share/ca-certificates/140642.pem
	I0604 23:14:56.123319    6196 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/140642.pem
	I0604 23:14:56.133277    6196 command_runner.go:130] > 3ec20f2e
	I0604 23:14:56.149033    6196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/140642.pem /etc/ssl/certs/3ec20f2e.0"
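The hashes printed by openssl x509 -hash above (b5213941, 51391683, 3ec20f2e) become the symlink names under /etc/ssl/certs because that is how OpenSSL looks up CA certificates at verification time: it hashes the subject name and searches for <hash>.0. The pairing can be reproduced by hand with a sketch like:

    # print the subject-name hash OpenSSL will search for
    H=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    # create the lookup symlink OpenSSL expects
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${H}.0"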
	I0604 23:14:56.184488    6196 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0604 23:14:56.187330    6196 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0604 23:14:56.190634    6196 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0604 23:14:56.191049    6196 kubeadm.go:391] StartCluster: {Name:multinode-022000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19024/minikube-v1.33.1-1717518792-19024-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717518322-19024@sha256:d2210ba725128d67c6173c8b8d82d6c8736e8dad7a6c389a278f795205c6764f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.3
0.1 ClusterName:multinode-022000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.128.97 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[
] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0604 23:14:56.200448    6196 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0604 23:14:56.239782    6196 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0604 23:14:56.258881    6196 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0604 23:14:56.258881    6196 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0604 23:14:56.258881    6196 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0604 23:14:56.271246    6196 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0604 23:14:56.305672    6196 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0604 23:14:56.325913    6196 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0604 23:14:56.326199    6196 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0604 23:14:56.326199    6196 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0604 23:14:56.326199    6196 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0604 23:14:56.326565    6196 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0604 23:14:56.326565    6196 kubeadm.go:156] found existing configuration files:
	
	I0604 23:14:56.338027    6196 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0604 23:14:56.360734    6196 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0604 23:14:56.366350    6196 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0604 23:14:56.380163    6196 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0604 23:14:56.412300    6196 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0604 23:14:56.427161    6196 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0604 23:14:56.433172    6196 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0604 23:14:56.445004    6196 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0604 23:14:56.479418    6196 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0604 23:14:56.491117    6196 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0604 23:14:56.491117    6196 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0604 23:14:56.512442    6196 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0604 23:14:56.550482    6196 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0604 23:14:56.569714    6196 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0604 23:14:56.570620    6196 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0604 23:14:56.583476    6196 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0604 23:14:56.604447    6196 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0604 23:14:57.057002    6196 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0604 23:14:57.057099    6196 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0604 23:15:11.814254    6196 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0604 23:15:11.814254    6196 command_runner.go:130] > [init] Using Kubernetes version: v1.30.1
	I0604 23:15:11.814254    6196 command_runner.go:130] > [preflight] Running pre-flight checks
	I0604 23:15:11.814254    6196 kubeadm.go:309] [preflight] Running pre-flight checks
	I0604 23:15:11.814254    6196 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0604 23:15:11.814254    6196 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0604 23:15:11.814879    6196 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0604 23:15:11.814879    6196 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0604 23:15:11.815027    6196 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0604 23:15:11.815147    6196 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0604 23:15:11.815499    6196 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0604 23:15:11.815499    6196 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0604 23:15:11.818530    6196 out.go:204]   - Generating certificates and keys ...
	I0604 23:15:11.818879    6196 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0604 23:15:11.818958    6196 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0604 23:15:11.819124    6196 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0604 23:15:11.819124    6196 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0604 23:15:11.819124    6196 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0604 23:15:11.819124    6196 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0604 23:15:11.819124    6196 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0604 23:15:11.819124    6196 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0604 23:15:11.819669    6196 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0604 23:15:11.819669    6196 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0604 23:15:11.819815    6196 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0604 23:15:11.819906    6196 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0604 23:15:11.820105    6196 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0604 23:15:11.820105    6196 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0604 23:15:11.820385    6196 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-022000] and IPs [172.20.128.97 127.0.0.1 ::1]
	I0604 23:15:11.820455    6196 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-022000] and IPs [172.20.128.97 127.0.0.1 ::1]
	I0604 23:15:11.820455    6196 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0604 23:15:11.820455    6196 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0604 23:15:11.820455    6196 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-022000] and IPs [172.20.128.97 127.0.0.1 ::1]
	I0604 23:15:11.820455    6196 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-022000] and IPs [172.20.128.97 127.0.0.1 ::1]
	I0604 23:15:11.821374    6196 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0604 23:15:11.821374    6196 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0604 23:15:11.821374    6196 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0604 23:15:11.821374    6196 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0604 23:15:11.821374    6196 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0604 23:15:11.821374    6196 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0604 23:15:11.821906    6196 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0604 23:15:11.821906    6196 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0604 23:15:11.822008    6196 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0604 23:15:11.822044    6196 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0604 23:15:11.822044    6196 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0604 23:15:11.822044    6196 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0604 23:15:11.822044    6196 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0604 23:15:11.822044    6196 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0604 23:15:11.822044    6196 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0604 23:15:11.822044    6196 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0604 23:15:11.822044    6196 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0604 23:15:11.822612    6196 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0604 23:15:11.822885    6196 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0604 23:15:11.822885    6196 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0604 23:15:11.822885    6196 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0604 23:15:11.822885    6196 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0604 23:15:11.825781    6196 out.go:204]   - Booting up control plane ...
	I0604 23:15:11.826022    6196 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0604 23:15:11.826022    6196 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0604 23:15:11.826022    6196 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0604 23:15:11.826022    6196 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0604 23:15:11.826022    6196 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0604 23:15:11.826022    6196 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0604 23:15:11.826613    6196 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0604 23:15:11.826613    6196 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0604 23:15:11.826847    6196 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0604 23:15:11.826847    6196 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0604 23:15:11.826847    6196 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0604 23:15:11.826847    6196 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0604 23:15:11.827304    6196 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0604 23:15:11.827304    6196 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0604 23:15:11.827304    6196 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0604 23:15:11.827304    6196 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0604 23:15:11.827304    6196 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.502489831s
	I0604 23:15:11.827304    6196 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 1.502489831s
	I0604 23:15:11.827304    6196 command_runner.go:130] > [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0604 23:15:11.827304    6196 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0604 23:15:11.827304    6196 command_runner.go:130] > [api-check] The API server is healthy after 7.50349621s
	I0604 23:15:11.827304    6196 kubeadm.go:309] [api-check] The API server is healthy after 7.50349621s
	I0604 23:15:11.828073    6196 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0604 23:15:11.828129    6196 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0604 23:15:11.828213    6196 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0604 23:15:11.828213    6196 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0604 23:15:11.828213    6196 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0604 23:15:11.828213    6196 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0604 23:15:11.828858    6196 kubeadm.go:309] [mark-control-plane] Marking the node multinode-022000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0604 23:15:11.828858    6196 command_runner.go:130] > [mark-control-plane] Marking the node multinode-022000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0604 23:15:11.828858    6196 kubeadm.go:309] [bootstrap-token] Using token: fs2z3x.tlj9242qgak2cvhr
	I0604 23:15:11.828858    6196 command_runner.go:130] > [bootstrap-token] Using token: fs2z3x.tlj9242qgak2cvhr
	I0604 23:15:11.831396    6196 out.go:204]   - Configuring RBAC rules ...
	I0604 23:15:11.834649    6196 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0604 23:15:11.834649    6196 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0604 23:15:11.834649    6196 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0604 23:15:11.834649    6196 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0604 23:15:11.835214    6196 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0604 23:15:11.835214    6196 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0604 23:15:11.835409    6196 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0604 23:15:11.835409    6196 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0604 23:15:11.835409    6196 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0604 23:15:11.835949    6196 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0604 23:15:11.835996    6196 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0604 23:15:11.835996    6196 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0604 23:15:11.835996    6196 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0604 23:15:11.835996    6196 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0604 23:15:11.835996    6196 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0604 23:15:11.835996    6196 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0604 23:15:11.836683    6196 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0604 23:15:11.836683    6196 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0604 23:15:11.836683    6196 kubeadm.go:309] 
	I0604 23:15:11.836683    6196 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0604 23:15:11.836683    6196 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0604 23:15:11.836683    6196 kubeadm.go:309] 
	I0604 23:15:11.836683    6196 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0604 23:15:11.836683    6196 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0604 23:15:11.836683    6196 kubeadm.go:309] 
	I0604 23:15:11.836683    6196 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0604 23:15:11.837225    6196 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0604 23:15:11.837273    6196 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0604 23:15:11.837273    6196 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0604 23:15:11.837382    6196 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0604 23:15:11.837382    6196 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0604 23:15:11.837382    6196 kubeadm.go:309] 
	I0604 23:15:11.837382    6196 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0604 23:15:11.837382    6196 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0604 23:15:11.837382    6196 kubeadm.go:309] 
	I0604 23:15:11.837382    6196 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0604 23:15:11.837382    6196 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0604 23:15:11.837382    6196 kubeadm.go:309] 
	I0604 23:15:11.837382    6196 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0604 23:15:11.837964    6196 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0604 23:15:11.838009    6196 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0604 23:15:11.838009    6196 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0604 23:15:11.838009    6196 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0604 23:15:11.838009    6196 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0604 23:15:11.838009    6196 kubeadm.go:309] 
	I0604 23:15:11.838009    6196 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0604 23:15:11.838542    6196 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0604 23:15:11.838693    6196 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0604 23:15:11.838737    6196 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0604 23:15:11.838737    6196 kubeadm.go:309] 
	I0604 23:15:11.838737    6196 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token fs2z3x.tlj9242qgak2cvhr \
	I0604 23:15:11.838737    6196 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token fs2z3x.tlj9242qgak2cvhr \
	I0604 23:15:11.839076    6196 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:0ff0b921e821f4428dbcf012dd411f291cf65527f1de8321919343295d694e15 \
	I0604 23:15:11.839076    6196 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:0ff0b921e821f4428dbcf012dd411f291cf65527f1de8321919343295d694e15 \
	I0604 23:15:11.839076    6196 kubeadm.go:309] 	--control-plane 
	I0604 23:15:11.839076    6196 command_runner.go:130] > 	--control-plane 
	I0604 23:15:11.839076    6196 kubeadm.go:309] 
	I0604 23:15:11.839355    6196 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0604 23:15:11.839355    6196 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0604 23:15:11.839355    6196 kubeadm.go:309] 
	I0604 23:15:11.839355    6196 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token fs2z3x.tlj9242qgak2cvhr \
	I0604 23:15:11.839355    6196 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token fs2z3x.tlj9242qgak2cvhr \
	I0604 23:15:11.839355    6196 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:0ff0b921e821f4428dbcf012dd411f291cf65527f1de8321919343295d694e15 
	I0604 23:15:11.839355    6196 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:0ff0b921e821f4428dbcf012dd411f291cf65527f1de8321919343295d694e15 
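For context, the join commands above are what additional control-plane or worker nodes would run against this cluster. A minimal, purely illustrative Go sketch (not minikube's actual kubeadm.go logic) of pulling the bootstrap token and CA-cert hash out of captured kubeadm output with the standard library:

package main

import (
	"fmt"
	"regexp"
)

// Example input: the tail of the "kubeadm init" output captured in the log above.
const initOutput = `kubeadm join control-plane.minikube.internal:8443 --token fs2z3x.tlj9242qgak2cvhr \
	--discovery-token-ca-cert-hash sha256:0ff0b921e821f4428dbcf012dd411f291cf65527f1de8321919343295d694e15 `

func main() {
	// Extract the bootstrap token and the discovery CA-cert hash from the output.
	tokenRe := regexp.MustCompile(`--token\s+(\S+)`)
	hashRe := regexp.MustCompile(`--discovery-token-ca-cert-hash\s+(\S+)`)

	token := tokenRe.FindStringSubmatch(initOutput)
	hash := hashRe.FindStringSubmatch(initOutput)
	if token == nil || hash == nil {
		fmt.Println("join command not found in kubeadm output")
		return
	}
	fmt.Printf("token: %s\nca-cert-hash: %s\n", token[1], hash[1])
}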
	I0604 23:15:11.839355    6196 cni.go:84] Creating CNI manager for ""
	I0604 23:15:11.839355    6196 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0604 23:15:11.840623    6196 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0604 23:15:11.859991    6196 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0604 23:15:11.868798    6196 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0604 23:15:11.868798    6196 command_runner.go:130] >   Size: 2781656   	Blocks: 5440       IO Block: 4096   regular file
	I0604 23:15:11.868798    6196 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0604 23:15:11.868798    6196 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0604 23:15:11.868798    6196 command_runner.go:130] > Access: 2024-06-04 23:13:06.646457700 +0000
	I0604 23:15:11.868798    6196 command_runner.go:130] > Modify: 2024-06-04 20:55:58.000000000 +0000
	I0604 23:15:11.868798    6196 command_runner.go:130] > Change: 2024-06-04 23:12:58.070000000 +0000
	I0604 23:15:11.868798    6196 command_runner.go:130] >  Birth: -
	I0604 23:15:11.868798    6196 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.1/kubectl ...
	I0604 23:15:11.868798    6196 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0604 23:15:11.918092    6196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0604 23:15:12.354092    6196 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0604 23:15:12.354218    6196 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0604 23:15:12.354218    6196 command_runner.go:130] > serviceaccount/kindnet created
	I0604 23:15:12.354218    6196 command_runner.go:130] > daemonset.apps/kindnet created
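For context, the four "created" lines above come from applying the kindnet CNI manifest with the kubectl binary shipped inside the VM. A hedged Go sketch of the same step using os/exec (the paths match this run's log, but this is an illustration, not minikube's ssh_runner implementation):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Paths as seen in the log; inside the minikube VM the binaries live under
	// /var/lib/minikube/binaries/<version>/.
	kubectl := "/var/lib/minikube/binaries/v1.30.1/kubectl"
	kubeconfig := "/var/lib/minikube/kubeconfig"
	manifest := "/var/tmp/minikube/cni.yaml"

	out, err := exec.Command("sudo", kubectl,
		"apply", "--kubeconfig="+kubeconfig, "-f", manifest).CombinedOutput()
	fmt.Print(string(out)) // e.g. "daemonset.apps/kindnet created"
	if err != nil {
		fmt.Println("apply failed:", err)
	}
}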
	I0604 23:15:12.354218    6196 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0604 23:15:12.374219    6196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 23:15:12.375265    6196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-022000 minikube.k8s.io/updated_at=2024_06_04T23_15_12_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=901ac483c3e1097c63cda7493d918b612a8127f5 minikube.k8s.io/name=multinode-022000 minikube.k8s.io/primary=true
	I0604 23:15:12.380446    6196 command_runner.go:130] > -16
	I0604 23:15:12.380446    6196 ops.go:34] apiserver oom_adj: -16
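For context, the -16 above is the kube-apiserver's legacy OOM adjustment read from procfs (range -17..15; a strongly negative value makes the OOM killer avoid the process). A small standalone Go sketch of the same pgrep + /proc read, independent of minikube's command_runner:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Find the kube-apiserver PID, as "pgrep kube-apiserver" does in the log.
	pidOut, err := exec.Command("pgrep", "kube-apiserver").Output()
	if err != nil {
		fmt.Println("kube-apiserver not running:", err)
		return
	}
	fields := strings.Fields(string(pidOut))
	if len(fields) == 0 {
		fmt.Println("kube-apiserver not running")
		return
	}

	// Read the oom_adj value for the first matching PID.
	data, err := os.ReadFile("/proc/" + fields[0] + "/oom_adj")
	if err != nil {
		fmt.Println("read failed:", err)
		return
	}
	fmt.Println("apiserver oom_adj:", strings.TrimSpace(string(data)))
}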
	I0604 23:15:12.584305    6196 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0604 23:15:12.589832    6196 command_runner.go:130] > node/multinode-022000 labeled
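For context, the "node/multinode-022000 labeled" line above is the result of the kubectl label command setting the minikube.k8s.io/* metadata on the primary node. A hedged client-go sketch of adding one such label (kubeconfig path is a placeholder; the real run shells out to kubectl instead):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ctx := context.Background()
	node, err := cs.CoreV1().Nodes().Get(ctx, "multinode-022000", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	if node.Labels == nil {
		node.Labels = map[string]string{}
	}
	node.Labels["minikube.k8s.io/primary"] = "true" // same effect as `kubectl label --overwrite`
	if _, err := cs.CoreV1().Nodes().Update(ctx, node, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("node labeled")
}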
	I0604 23:15:12.599351    6196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 23:15:12.742639    6196 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0604 23:15:13.115742    6196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 23:15:13.236723    6196 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0604 23:15:13.618293    6196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 23:15:13.737240    6196 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0604 23:15:14.113848    6196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 23:15:14.221272    6196 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0604 23:15:14.600491    6196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 23:15:14.708338    6196 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0604 23:15:15.104814    6196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 23:15:15.203342    6196 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0604 23:15:15.615158    6196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 23:15:15.720265    6196 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0604 23:15:16.096495    6196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 23:15:16.213033    6196 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0604 23:15:16.597273    6196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 23:15:16.712009    6196 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0604 23:15:17.109214    6196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 23:15:17.223941    6196 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0604 23:15:17.608086    6196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 23:15:17.740862    6196 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0604 23:15:18.101412    6196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 23:15:18.209689    6196 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0604 23:15:18.619010    6196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 23:15:18.730047    6196 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0604 23:15:19.108983    6196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 23:15:19.223667    6196 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0604 23:15:19.612729    6196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 23:15:19.737656    6196 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0604 23:15:20.106209    6196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 23:15:20.217605    6196 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0604 23:15:20.599134    6196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 23:15:20.709610    6196 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0604 23:15:21.106521    6196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 23:15:21.227066    6196 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0604 23:15:21.615437    6196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 23:15:21.728309    6196 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0604 23:15:22.105319    6196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 23:15:22.214620    6196 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0604 23:15:22.598134    6196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 23:15:22.735396    6196 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0604 23:15:23.115964    6196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 23:15:23.225821    6196 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0604 23:15:23.606058    6196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 23:15:23.730992    6196 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0604 23:15:24.105458    6196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 23:15:24.234573    6196 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0604 23:15:24.617412    6196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 23:15:24.768031    6196 command_runner.go:130] > NAME      SECRETS   AGE
	I0604 23:15:24.768077    6196 command_runner.go:130] > default   0         0s
	I0604 23:15:24.768358    6196 kubeadm.go:1107] duration metric: took 12.4140455s to wait for elevateKubeSystemPrivileges
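For context, the repeated 'serviceaccounts "default" not found' lines above are expected: the default ServiceAccount only appears once the controller-manager's service-account controller has started, so the tool retries roughly every half second until it shows up. A hedged client-go sketch of that wait loop (the kubeconfig path and timeout are placeholders for illustration):

package main

import (
	"context"
	"fmt"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // placeholder
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	deadline := time.Now().Add(2 * time.Minute) // arbitrary timeout for this sketch
	for time.Now().Before(deadline) {
		_, err := cs.CoreV1().ServiceAccounts("default").Get(context.Background(), "default", metav1.GetOptions{})
		if err == nil {
			fmt.Println("default service account is ready")
			return
		}
		if !apierrors.IsNotFound(err) {
			panic(err) // anything other than NotFound is a real failure
		}
		time.Sleep(500 * time.Millisecond) // roughly the cadence seen in the log
	}
	fmt.Println("timed out waiting for default service account")
}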
	W0604 23:15:24.768358    6196 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0604 23:15:24.768358    6196 kubeadm.go:393] duration metric: took 28.5770916s to StartCluster
	I0604 23:15:24.768358    6196 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0604 23:15:24.768358    6196 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0604 23:15:24.770670    6196 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0604 23:15:24.772442    6196 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0604 23:15:24.772442    6196 start.go:234] Will wait 6m0s for node &{Name: IP:172.20.128.97 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0604 23:15:24.772442    6196 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0604 23:15:24.777940    6196 out.go:177] * Verifying Kubernetes components...
	I0604 23:15:24.772442    6196 addons.go:69] Setting storage-provisioner=true in profile "multinode-022000"
	I0604 23:15:24.772442    6196 addons.go:69] Setting default-storageclass=true in profile "multinode-022000"
	I0604 23:15:24.773172    6196 config.go:182] Loaded profile config "multinode-022000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0604 23:15:24.780892    6196 addons.go:234] Setting addon storage-provisioner=true in "multinode-022000"
	I0604 23:15:24.780959    6196 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-022000"
	I0604 23:15:24.781051    6196 host.go:66] Checking if "multinode-022000" exists ...
	I0604 23:15:24.781106    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000 ).state
	I0604 23:15:24.783678    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000 ).state
	I0604 23:15:24.798420    6196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0604 23:15:25.040504    6196 command_runner.go:130] > apiVersion: v1
	I0604 23:15:25.040504    6196 command_runner.go:130] > data:
	I0604 23:15:25.040504    6196 command_runner.go:130] >   Corefile: |
	I0604 23:15:25.040504    6196 command_runner.go:130] >     .:53 {
	I0604 23:15:25.040504    6196 command_runner.go:130] >         errors
	I0604 23:15:25.040504    6196 command_runner.go:130] >         health {
	I0604 23:15:25.040504    6196 command_runner.go:130] >            lameduck 5s
	I0604 23:15:25.040504    6196 command_runner.go:130] >         }
	I0604 23:15:25.040504    6196 command_runner.go:130] >         ready
	I0604 23:15:25.040504    6196 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0604 23:15:25.040504    6196 command_runner.go:130] >            pods insecure
	I0604 23:15:25.040504    6196 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0604 23:15:25.040504    6196 command_runner.go:130] >            ttl 30
	I0604 23:15:25.040504    6196 command_runner.go:130] >         }
	I0604 23:15:25.040504    6196 command_runner.go:130] >         prometheus :9153
	I0604 23:15:25.040504    6196 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0604 23:15:25.040504    6196 command_runner.go:130] >            max_concurrent 1000
	I0604 23:15:25.040504    6196 command_runner.go:130] >         }
	I0604 23:15:25.040504    6196 command_runner.go:130] >         cache 30
	I0604 23:15:25.040504    6196 command_runner.go:130] >         loop
	I0604 23:15:25.040504    6196 command_runner.go:130] >         reload
	I0604 23:15:25.040504    6196 command_runner.go:130] >         loadbalance
	I0604 23:15:25.040504    6196 command_runner.go:130] >     }
	I0604 23:15:25.040504    6196 command_runner.go:130] > kind: ConfigMap
	I0604 23:15:25.040504    6196 command_runner.go:130] > metadata:
	I0604 23:15:25.040504    6196 command_runner.go:130] >   creationTimestamp: "2024-06-04T23:15:11Z"
	I0604 23:15:25.040504    6196 command_runner.go:130] >   name: coredns
	I0604 23:15:25.040504    6196 command_runner.go:130] >   namespace: kube-system
	I0604 23:15:25.040504    6196 command_runner.go:130] >   resourceVersion: "231"
	I0604 23:15:25.040504    6196 command_runner.go:130] >   uid: 76c64db5-87c5-4704-a57d-c416baff3d22
	I0604 23:15:25.040504    6196 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.20.128.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0604 23:15:25.179821    6196 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0604 23:15:25.658423    6196 command_runner.go:130] > configmap/coredns replaced
	I0604 23:15:25.658669    6196 start.go:946] {"host.minikube.internal": 172.20.128.1} host record injected into CoreDNS's ConfigMap
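For context, the sed pipeline above injects a hosts block (mapping host.minikube.internal to the host-side gateway IP) into the coredns Corefile before the forward plugin, then replaces the ConfigMap. A hedged client-go sketch of the same idea, doing the string edit in Go instead of sed (the IP is copied from this run; everything else is illustrative, not minikube's start.go code):

package main

import (
	"context"
	"fmt"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // placeholder
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ctx := context.Background()
	cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	corefile, ok := cm.Data["Corefile"]
	if !ok {
		panic("Corefile not found in coredns ConfigMap")
	}

	// Insert a hosts block just before the "forward" plugin, reusing its indentation.
	var out []string
	for _, line := range strings.Split(corefile, "\n") {
		if strings.Contains(line, "forward .") {
			indent := line[:len(line)-len(strings.TrimLeft(line, " "))]
			out = append(out,
				indent+"hosts {",
				indent+"   172.20.128.1 host.minikube.internal",
				indent+"   fallthrough",
				indent+"}")
		}
		out = append(out, line)
	}
	cm.Data["Corefile"] = strings.Join(out, "\n")

	if _, err := cs.CoreV1().ConfigMaps("kube-system").Update(ctx, cm, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("configmap/coredns replaced")
}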
	I0604 23:15:25.660332    6196 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0604 23:15:25.660730    6196 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0604 23:15:25.661992    6196 kapi.go:59] client config for multinode-022000: &rest.Config{Host:"https://172.20.128.97:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-022000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-022000\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAD
ata:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x240e1a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0604 23:15:25.662206    6196 kapi.go:59] client config for multinode-022000: &rest.Config{Host:"https://172.20.128.97:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-022000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-022000\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAD
ata:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x240e1a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0604 23:15:25.664409    6196 cert_rotation.go:137] Starting client certificate rotation controller
	I0604 23:15:25.665126    6196 node_ready.go:35] waiting up to 6m0s for node "multinode-022000" to be "Ready" ...
	I0604 23:15:25.665505    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000
	I0604 23:15:25.665505    6196 round_trippers.go:463] GET https://172.20.128.97:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0604 23:15:25.665564    6196 round_trippers.go:469] Request Headers:
	I0604 23:15:25.665564    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:15:25.665564    6196 round_trippers.go:469] Request Headers:
	I0604 23:15:25.665657    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:15:25.665690    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:15:25.665690    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:15:25.696775    6196 round_trippers.go:574] Response Status: 200 OK in 30 milliseconds
	I0604 23:15:25.696822    6196 round_trippers.go:577] Response Headers:
	I0604 23:15:25.696822    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:15:25.696822    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:15:25.696822    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:15:25.696822    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:15:25.696822    6196 round_trippers.go:580]     Content-Length: 291
	I0604 23:15:25.696822    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:15:25 GMT
	I0604 23:15:25.696822    6196 round_trippers.go:580]     Audit-Id: fecd57eb-5ddb-4f78-be51-26c78c9d6fca
	I0604 23:15:25.696822    6196 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"76656786-0932-4c4f-959b-ce3529a09397","resourceVersion":"360","creationTimestamp":"2024-06-04T23:15:11Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0604 23:15:25.697815    6196 round_trippers.go:574] Response Status: 200 OK in 32 milliseconds
	I0604 23:15:25.697815    6196 round_trippers.go:577] Response Headers:
	I0604 23:15:25.697815    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:15:25.697815    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:15:25 GMT
	I0604 23:15:25.697815    6196 round_trippers.go:580]     Audit-Id: 10659bc8-98a8-48f8-8eb3-112d0ab2bdbc
	I0604 23:15:25.697815    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:15:25.697815    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:15:25.697815    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:15:25.697815    6196 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"76656786-0932-4c4f-959b-ce3529a09397","resourceVersion":"360","creationTimestamp":"2024-06-04T23:15:11Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0604 23:15:25.700249    6196 round_trippers.go:463] PUT https://172.20.128.97:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0604 23:15:25.700249    6196 round_trippers.go:469] Request Headers:
	I0604 23:15:25.700249    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:15:25.700249    6196 round_trippers.go:473]     Content-Type: application/json
	I0604 23:15:25.700249    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:15:25.702872    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000","uid":"e6c7d6d5-31ef-4aae-82b1-0d1130b29243","resourceVersion":"344","creationTimestamp":"2024-06-04T23:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_04T23_15_12_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-04T23:15:07Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0604 23:15:25.726666    6196 round_trippers.go:574] Response Status: 200 OK in 26 milliseconds
	I0604 23:15:25.730508    6196 round_trippers.go:577] Response Headers:
	I0604 23:15:25.730508    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:15:25 GMT
	I0604 23:15:25.730508    6196 round_trippers.go:580]     Audit-Id: bf3a5c9c-dc7f-45fc-be3e-d7cb1b1e71dd
	I0604 23:15:25.730508    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:15:25.730508    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:15:25.730508    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:15:25.730508    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:15:25.730591    6196 round_trippers.go:580]     Content-Length: 291
	I0604 23:15:25.730659    6196 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"76656786-0932-4c4f-959b-ce3529a09397","resourceVersion":"362","creationTimestamp":"2024-06-04T23:15:11Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0604 23:15:26.184871    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000
	I0604 23:15:26.184871    6196 round_trippers.go:463] GET https://172.20.128.97:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0604 23:15:26.185329    6196 round_trippers.go:469] Request Headers:
	I0604 23:15:26.185373    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:15:26.184871    6196 round_trippers.go:469] Request Headers:
	I0604 23:15:26.185373    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:15:26.185373    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:15:26.185373    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:15:26.194558    6196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0604 23:15:26.194653    6196 round_trippers.go:577] Response Headers:
	I0604 23:15:26.194653    6196 round_trippers.go:580]     Content-Length: 291
	I0604 23:15:26.194653    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:15:26 GMT
	I0604 23:15:26.194653    6196 round_trippers.go:580]     Audit-Id: b8a5f5e8-950d-4741-bff6-92a281d8d6f1
	I0604 23:15:26.194708    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:15:26.194708    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:15:26.194708    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:15:26.194607    6196 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0604 23:15:26.194708    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:15:26.194708    6196 round_trippers.go:577] Response Headers:
	I0604 23:15:26.194708    6196 round_trippers.go:580]     Audit-Id: 1e33f977-5257-4bce-9542-84d0b678abbd
	I0604 23:15:26.194708    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:15:26.194708    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:15:26.194708    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:15:26.194708    6196 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"76656786-0932-4c4f-959b-ce3529a09397","resourceVersion":"373","creationTimestamp":"2024-06-04T23:15:11Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0604 23:15:26.194708    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:15:26.194708    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:15:26 GMT
	I0604 23:15:26.195089    6196 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-022000" context rescaled to 1 replicas
	I0604 23:15:26.195381    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000","uid":"e6c7d6d5-31ef-4aae-82b1-0d1130b29243","resourceVersion":"344","creationTimestamp":"2024-06-04T23:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_04T23_15_12_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-04T23:15:07Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0604 23:15:26.685182    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000
	I0604 23:15:26.685182    6196 round_trippers.go:469] Request Headers:
	I0604 23:15:26.685182    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:15:26.685182    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:15:26.687737    6196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0604 23:15:26.689640    6196 round_trippers.go:577] Response Headers:
	I0604 23:15:26.689640    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:15:26.689640    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:15:26.689640    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:15:26.689640    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:15:26 GMT
	I0604 23:15:26.689640    6196 round_trippers.go:580]     Audit-Id: fc2180c8-f4c7-4384-80a0-801bdad36980
	I0604 23:15:26.689640    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:15:26.695210    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000","uid":"e6c7d6d5-31ef-4aae-82b1-0d1130b29243","resourceVersion":"344","creationTimestamp":"2024-06-04T23:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_04T23_15_12_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-04T23:15:07Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0604 23:15:27.170161    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000
	I0604 23:15:27.170161    6196 round_trippers.go:469] Request Headers:
	I0604 23:15:27.170258    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:15:27.170258    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:15:27.170580    6196 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0604 23:15:27.174291    6196 round_trippers.go:577] Response Headers:
	I0604 23:15:27.174291    6196 round_trippers.go:580]     Audit-Id: 9d95ed19-1620-4262-8deb-949720e02ae9
	I0604 23:15:27.174291    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:15:27.174291    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:15:27.174291    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:15:27.174291    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:15:27.174291    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:15:27 GMT
	I0604 23:15:27.174665    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000","uid":"e6c7d6d5-31ef-4aae-82b1-0d1130b29243","resourceVersion":"344","creationTimestamp":"2024-06-04T23:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_04T23_15_12_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-04T23:15:07Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0604 23:15:27.272369    6196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:15:27.272369    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:15:27.272369    6196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:15:27.272369    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:15:27.298589    6196 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0604 23:15:27.288756    6196 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0604 23:15:27.304601    6196 kapi.go:59] client config for multinode-022000: &rest.Config{Host:"https://172.20.128.97:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-022000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-022000\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAD
ata:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x240e1a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0604 23:15:27.321291    6196 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0604 23:15:27.321350    6196 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0604 23:15:27.321350    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000 ).state
	I0604 23:15:27.321350    6196 addons.go:234] Setting addon default-storageclass=true in "multinode-022000"
	I0604 23:15:27.321350    6196 host.go:66] Checking if "multinode-022000" exists ...
	I0604 23:15:27.323373    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000 ).state
	I0604 23:15:27.683848    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000
	I0604 23:15:27.683848    6196 round_trippers.go:469] Request Headers:
	I0604 23:15:27.683848    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:15:27.683848    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:15:27.700642    6196 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0604 23:15:27.700642    6196 round_trippers.go:577] Response Headers:
	I0604 23:15:27.703249    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:15:27.703249    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:15:27.703249    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:15:27.703249    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:15:27 GMT
	I0604 23:15:27.703249    6196 round_trippers.go:580]     Audit-Id: 3b2abc18-c863-4d70-a1e2-3f0ab41e3309
	I0604 23:15:27.703249    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:15:27.703480    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000","uid":"e6c7d6d5-31ef-4aae-82b1-0d1130b29243","resourceVersion":"344","creationTimestamp":"2024-06-04T23:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_04T23_15_12_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-04T23:15:07Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0604 23:15:27.703480    6196 node_ready.go:53] node "multinode-022000" has status "Ready":"False"
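For context, the 'has status "Ready":"False"' line above comes from polling GET /api/v1/nodes/<name> and inspecting the NodeReady condition; the node only flips to Ready once the CNI (kindnet here) is running. A hedged client-go sketch of that single condition check (kubeconfig path is a placeholder; the real code keeps polling up to the 6m0s budget):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeIsReady reports whether the node's NodeReady condition is True.
func nodeIsReady(node *corev1.Node) bool {
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // placeholder
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	node, err := cs.CoreV1().Nodes().Get(context.Background(), "multinode-022000", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("node %q Ready=%v\n", node.Name, nodeIsReady(node))
}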
	I0604 23:15:28.181012    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000
	I0604 23:15:28.181012    6196 round_trippers.go:469] Request Headers:
	I0604 23:15:28.181117    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:15:28.181117    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:15:28.184827    6196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0604 23:15:28.184899    6196 round_trippers.go:577] Response Headers:
	I0604 23:15:28.184966    6196 round_trippers.go:580]     Audit-Id: 9674cc69-74bf-4b69-a88f-e9b08cab16e2
	I0604 23:15:28.184966    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:15:28.184966    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:15:28.184966    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:15:28.184966    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:15:28.185023    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:15:28 GMT
	I0604 23:15:28.185515    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000","uid":"e6c7d6d5-31ef-4aae-82b1-0d1130b29243","resourceVersion":"344","creationTimestamp":"2024-06-04T23:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_04T23_15_12_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-04T23:15:07Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0604 23:15:28.668128    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000
	I0604 23:15:28.668128    6196 round_trippers.go:469] Request Headers:
	I0604 23:15:28.668128    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:15:28.668128    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:15:28.673499    6196 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0604 23:15:28.673499    6196 round_trippers.go:577] Response Headers:
	I0604 23:15:28.673499    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:15:28.673616    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:15:28.673616    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:15:28.673616    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:15:28.673616    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:15:28 GMT
	I0604 23:15:28.673672    6196 round_trippers.go:580]     Audit-Id: ba57bb6c-71db-4f10-a75f-cdbe838a378c
	I0604 23:15:28.673915    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000","uid":"e6c7d6d5-31ef-4aae-82b1-0d1130b29243","resourceVersion":"344","creationTimestamp":"2024-06-04T23:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_04T23_15_12_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-04T23:15:07Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0604 23:15:29.176396    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000
	I0604 23:15:29.176396    6196 round_trippers.go:469] Request Headers:
	I0604 23:15:29.176396    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:15:29.176396    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:15:29.177435    6196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0604 23:15:29.177435    6196 round_trippers.go:577] Response Headers:
	I0604 23:15:29.180239    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:15:29.180239    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:15:29.180239    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:15:29 GMT
	I0604 23:15:29.180239    6196 round_trippers.go:580]     Audit-Id: c2350ca2-ebc7-4b8d-a3ee-71aea592ceab
	I0604 23:15:29.180239    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:15:29.180239    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:15:29.180385    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000","uid":"e6c7d6d5-31ef-4aae-82b1-0d1130b29243","resourceVersion":"344","creationTimestamp":"2024-06-04T23:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_04T23_15_12_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-04T23:15:07Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0604 23:15:29.670508    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000
	I0604 23:15:29.670590    6196 round_trippers.go:469] Request Headers:
	I0604 23:15:29.670654    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:15:29.670654    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:15:29.674810    6196 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0604 23:15:29.676412    6196 round_trippers.go:577] Response Headers:
	I0604 23:15:29.676412    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:15:29.676412    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:15:29.676412    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:15:29.676412    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:15:29 GMT
	I0604 23:15:29.676412    6196 round_trippers.go:580]     Audit-Id: 34b3ef29-0923-4520-a965-412cc1ffcdad
	I0604 23:15:29.676412    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:15:29.677106    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000","uid":"e6c7d6d5-31ef-4aae-82b1-0d1130b29243","resourceVersion":"344","creationTimestamp":"2024-06-04T23:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_04T23_15_12_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-04T23:15:07Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0604 23:15:29.817347    6196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:15:29.817347    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:15:29.817347    6196 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0604 23:15:29.817347    6196 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
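For context, the storageclass.yaml addon copied above is what provides the cluster's default StorageClass for the hostpath storage-provisioner. A hedged client-go sketch of creating an equivalent object; the "standard" name and the k8s.io/minikube-hostpath provisioner are assumptions based on the stock minikube addon, and this is an illustration rather than the addon file itself:

package main

import (
	"context"
	"fmt"

	storagev1 "k8s.io/api/storage/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // placeholder
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	sc := &storagev1.StorageClass{
		ObjectMeta: metav1.ObjectMeta{
			Name: "standard", // assumed name, matching the usual minikube default class
			Annotations: map[string]string{
				// This annotation marks the class as the cluster default.
				"storageclass.kubernetes.io/is-default-class": "true",
			},
		},
		// Provisioner used by minikube's hostpath storage-provisioner (assumption for this sketch).
		Provisioner: "k8s.io/minikube-hostpath",
	}

	if _, err := cs.StorageV1().StorageClasses().Create(context.Background(), sc, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("storageclass/standard created")
}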
	I0604 23:15:29.817347    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000 ).state
	I0604 23:15:29.868879    6196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:15:29.868971    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:15:29.868971    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000 ).networkadapters[0]).ipaddresses[0]
	I0604 23:15:30.177524    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000
	I0604 23:15:30.177524    6196 round_trippers.go:469] Request Headers:
	I0604 23:15:30.177524    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:15:30.177524    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:15:30.178787    6196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0604 23:15:30.178787    6196 round_trippers.go:577] Response Headers:
	I0604 23:15:30.178787    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:15:30.178787    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:15:30.178787    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:15:30 GMT
	I0604 23:15:30.178787    6196 round_trippers.go:580]     Audit-Id: c7a43ddb-f866-4847-9f80-69decc4fa67e
	I0604 23:15:30.178787    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:15:30.178787    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:15:30.182685    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000","uid":"e6c7d6d5-31ef-4aae-82b1-0d1130b29243","resourceVersion":"344","creationTimestamp":"2024-06-04T23:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_04T23_15_12_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-04T23:15:07Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0604 23:15:30.183052    6196 node_ready.go:53] node "multinode-022000" has status "Ready":"False"
	I0604 23:15:30.666783    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000
	I0604 23:15:30.666903    6196 round_trippers.go:469] Request Headers:
	I0604 23:15:30.666903    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:15:30.666903    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:15:30.668745    6196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0604 23:15:30.671245    6196 round_trippers.go:577] Response Headers:
	I0604 23:15:30.671245    6196 round_trippers.go:580]     Audit-Id: fe6fcf36-59f5-451d-89e5-27e837a4b1c3
	I0604 23:15:30.671245    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:15:30.671245    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:15:30.671245    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:15:30.671245    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:15:30.671245    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:15:30 GMT
	I0604 23:15:30.671703    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000","uid":"e6c7d6d5-31ef-4aae-82b1-0d1130b29243","resourceVersion":"344","creationTimestamp":"2024-06-04T23:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_04T23_15_12_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-04T23:15:07Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0604 23:15:31.183756    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000
	I0604 23:15:31.183756    6196 round_trippers.go:469] Request Headers:
	I0604 23:15:31.183756    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:15:31.183756    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:15:31.186158    6196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0604 23:15:31.187906    6196 round_trippers.go:577] Response Headers:
	I0604 23:15:31.187906    6196 round_trippers.go:580]     Audit-Id: a24a5469-669b-4e26-affe-75409477066e
	I0604 23:15:31.187906    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:15:31.187906    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:15:31.187906    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:15:31.187906    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:15:31.187906    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:15:31 GMT
	I0604 23:15:31.187906    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000","uid":"e6c7d6d5-31ef-4aae-82b1-0d1130b29243","resourceVersion":"344","creationTimestamp":"2024-06-04T23:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_04T23_15_12_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-04T23:15:07Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0604 23:15:31.671969    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000
	I0604 23:15:31.672060    6196 round_trippers.go:469] Request Headers:
	I0604 23:15:31.672060    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:15:31.672060    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:15:31.672592    6196 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0604 23:15:31.676096    6196 round_trippers.go:577] Response Headers:
	I0604 23:15:31.676096    6196 round_trippers.go:580]     Audit-Id: 28426735-6074-4bb6-ab13-8ddbe70565cd
	I0604 23:15:31.676096    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:15:31.676096    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:15:31.676210    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:15:31.676210    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:15:31.676210    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:15:31 GMT
	I0604 23:15:31.676497    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000","uid":"e6c7d6d5-31ef-4aae-82b1-0d1130b29243","resourceVersion":"344","creationTimestamp":"2024-06-04T23:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_04T23_15_12_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-04T23:15:07Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0604 23:15:32.167135    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000
	I0604 23:15:32.167202    6196 round_trippers.go:469] Request Headers:
	I0604 23:15:32.167268    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:15:32.167268    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:15:32.217983    6196 round_trippers.go:574] Response Status: 200 OK in 50 milliseconds
	I0604 23:15:32.217983    6196 round_trippers.go:577] Response Headers:
	I0604 23:15:32.217983    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:15:32.229042    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:15:32.229042    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:15:32.229042    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:15:32.229042    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:15:32 GMT
	I0604 23:15:32.229042    6196 round_trippers.go:580]     Audit-Id: a488267f-4da8-4409-9866-564da63212ff
	I0604 23:15:32.229929    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000","uid":"e6c7d6d5-31ef-4aae-82b1-0d1130b29243","resourceVersion":"344","creationTimestamp":"2024-06-04T23:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_04T23_15_12_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-04T23:15:07Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0604 23:15:32.229993    6196 node_ready.go:53] node "multinode-022000" has status "Ready":"False"
	I0604 23:15:32.297741    6196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:15:32.300327    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:15:32.300474    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000 ).networkadapters[0]).ipaddresses[0]
	I0604 23:15:32.676469    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000
	I0604 23:15:32.676533    6196 round_trippers.go:469] Request Headers:
	I0604 23:15:32.676587    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:15:32.676587    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:15:32.682072    6196 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0604 23:15:32.682162    6196 round_trippers.go:577] Response Headers:
	I0604 23:15:32.682162    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:15:32.682162    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:15:32 GMT
	I0604 23:15:32.682230    6196 round_trippers.go:580]     Audit-Id: a66248f9-8456-4ce5-986c-63863de0fa47
	I0604 23:15:32.682274    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:15:32.682274    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:15:32.682356    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:15:32.682968    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000","uid":"e6c7d6d5-31ef-4aae-82b1-0d1130b29243","resourceVersion":"344","creationTimestamp":"2024-06-04T23:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_04T23_15_12_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-04T23:15:07Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0604 23:15:32.735581    6196 main.go:141] libmachine: [stdout =====>] : 172.20.128.97
	
	I0604 23:15:32.735581    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:15:32.745280    6196 sshutil.go:53] new ssh client: &{IP:172.20.128.97 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-022000\id_rsa Username:docker}
	I0604 23:15:32.938349    6196 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0604 23:15:33.172154    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000
	I0604 23:15:33.172154    6196 round_trippers.go:469] Request Headers:
	I0604 23:15:33.172154    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:15:33.172154    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:15:33.174411    6196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0604 23:15:33.174411    6196 round_trippers.go:577] Response Headers:
	I0604 23:15:33.174411    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:15:33.174411    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:15:33.174411    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:15:33 GMT
	I0604 23:15:33.174411    6196 round_trippers.go:580]     Audit-Id: 24557e18-305b-4ae7-990f-e29872d6cc6b
	I0604 23:15:33.176193    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:15:33.176311    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:15:33.177052    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000","uid":"e6c7d6d5-31ef-4aae-82b1-0d1130b29243","resourceVersion":"344","creationTimestamp":"2024-06-04T23:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_04T23_15_12_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-04T23:15:07Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0604 23:15:33.507857    6196 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0604 23:15:33.507857    6196 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0604 23:15:33.507857    6196 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0604 23:15:33.507857    6196 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0604 23:15:33.507857    6196 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0604 23:15:33.507857    6196 command_runner.go:130] > pod/storage-provisioner created
	I0604 23:15:33.669326    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000
	I0604 23:15:33.669326    6196 round_trippers.go:469] Request Headers:
	I0604 23:15:33.669326    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:15:33.669326    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:15:33.670449    6196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0604 23:15:33.673954    6196 round_trippers.go:577] Response Headers:
	I0604 23:15:33.674141    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:15:33.674141    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:15:33.674141    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:15:33.674141    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:15:33 GMT
	I0604 23:15:33.674141    6196 round_trippers.go:580]     Audit-Id: 02e1a508-d2b5-4f83-b815-bc1bed45b181
	I0604 23:15:33.674141    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:15:33.674141    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000","uid":"e6c7d6d5-31ef-4aae-82b1-0d1130b29243","resourceVersion":"344","creationTimestamp":"2024-06-04T23:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_04T23_15_12_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-04T23:15:07Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0604 23:15:34.178131    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000
	I0604 23:15:34.178131    6196 round_trippers.go:469] Request Headers:
	I0604 23:15:34.178131    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:15:34.178131    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:15:34.182097    6196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0604 23:15:34.182097    6196 round_trippers.go:577] Response Headers:
	I0604 23:15:34.182097    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:15:34.182097    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:15:34.182097    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:15:34.182097    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:15:34.182097    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:15:34 GMT
	I0604 23:15:34.182097    6196 round_trippers.go:580]     Audit-Id: ab0268bc-25bc-4178-afd1-198402cb645c
	I0604 23:15:34.182801    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000","uid":"e6c7d6d5-31ef-4aae-82b1-0d1130b29243","resourceVersion":"344","creationTimestamp":"2024-06-04T23:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_04T23_15_12_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-04T23:15:07Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0604 23:15:34.672069    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000
	I0604 23:15:34.672069    6196 round_trippers.go:469] Request Headers:
	I0604 23:15:34.672235    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:15:34.672235    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:15:34.674269    6196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0604 23:15:34.674269    6196 round_trippers.go:577] Response Headers:
	I0604 23:15:34.677711    6196 round_trippers.go:580]     Audit-Id: f9c14f6b-bc24-42b6-b797-75b7f8fdca76
	I0604 23:15:34.677711    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:15:34.677711    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:15:34.677711    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:15:34.677711    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:15:34.677711    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:15:34 GMT
	I0604 23:15:34.678581    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000","uid":"e6c7d6d5-31ef-4aae-82b1-0d1130b29243","resourceVersion":"344","creationTimestamp":"2024-06-04T23:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_04T23_15_12_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-04T23:15:07Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0604 23:15:34.679246    6196 node_ready.go:53] node "multinode-022000" has status "Ready":"False"
	I0604 23:15:35.110799    6196 main.go:141] libmachine: [stdout =====>] : 172.20.128.97
	
	I0604 23:15:35.110973    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:15:35.111264    6196 sshutil.go:53] new ssh client: &{IP:172.20.128.97 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-022000\id_rsa Username:docker}
	I0604 23:15:35.168107    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000
	I0604 23:15:35.168408    6196 round_trippers.go:469] Request Headers:
	I0604 23:15:35.168408    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:15:35.168408    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:15:35.174833    6196 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0604 23:15:35.174833    6196 round_trippers.go:577] Response Headers:
	I0604 23:15:35.174833    6196 round_trippers.go:580]     Audit-Id: a3a6b8e6-29ee-4764-b046-70d5825c34c6
	I0604 23:15:35.174833    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:15:35.174833    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:15:35.174833    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:15:35.174833    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:15:35.174833    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:15:35 GMT
	I0604 23:15:35.175488    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000","uid":"e6c7d6d5-31ef-4aae-82b1-0d1130b29243","resourceVersion":"344","creationTimestamp":"2024-06-04T23:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_04T23_15_12_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-04T23:15:07Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0604 23:15:35.251403    6196 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0604 23:15:35.427697    6196 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0604 23:15:35.428838    6196 round_trippers.go:463] GET https://172.20.128.97:8443/apis/storage.k8s.io/v1/storageclasses
	I0604 23:15:35.428838    6196 round_trippers.go:469] Request Headers:
	I0604 23:15:35.428838    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:15:35.428838    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:15:35.451007    6196 round_trippers.go:574] Response Status: 200 OK in 21 milliseconds
	I0604 23:15:35.451007    6196 round_trippers.go:577] Response Headers:
	I0604 23:15:35.451007    6196 round_trippers.go:580]     Content-Length: 1273
	I0604 23:15:35.451007    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:15:35 GMT
	I0604 23:15:35.451007    6196 round_trippers.go:580]     Audit-Id: b85435c9-6871-4609-a178-07c9a4667f2f
	I0604 23:15:35.451007    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:15:35.451007    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:15:35.451007    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:15:35.451007    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:15:35.451165    6196 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"400"},"items":[{"metadata":{"name":"standard","uid":"ab40f00e-25e9-4df4-9ba0-6035df5c3f6e","resourceVersion":"400","creationTimestamp":"2024-06-04T23:15:35Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-06-04T23:15:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0604 23:15:35.451881    6196 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"ab40f00e-25e9-4df4-9ba0-6035df5c3f6e","resourceVersion":"400","creationTimestamp":"2024-06-04T23:15:35Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-06-04T23:15:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0604 23:15:35.451969    6196 round_trippers.go:463] PUT https://172.20.128.97:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0604 23:15:35.451969    6196 round_trippers.go:469] Request Headers:
	I0604 23:15:35.451969    6196 round_trippers.go:473]     Content-Type: application/json
	I0604 23:15:35.451969    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:15:35.452050    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:15:35.460575    6196 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0604 23:15:35.460575    6196 round_trippers.go:577] Response Headers:
	I0604 23:15:35.460575    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:15:35.460575    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:15:35.460575    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:15:35.460575    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:15:35.460575    6196 round_trippers.go:580]     Content-Length: 1220
	I0604 23:15:35.460575    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:15:35 GMT
	I0604 23:15:35.460575    6196 round_trippers.go:580]     Audit-Id: 4cfaecfc-2b1f-4192-b517-8c51ed6423a9
	I0604 23:15:35.460575    6196 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"ab40f00e-25e9-4df4-9ba0-6035df5c3f6e","resourceVersion":"400","creationTimestamp":"2024-06-04T23:15:35Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-06-04T23:15:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0604 23:15:35.464767    6196 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0604 23:15:35.470476    6196 addons.go:510] duration metric: took 10.6979529s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0604 23:15:35.677616    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000
	I0604 23:15:35.677616    6196 round_trippers.go:469] Request Headers:
	I0604 23:15:35.678011    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:15:35.678011    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:15:35.678460    6196 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0604 23:15:35.678460    6196 round_trippers.go:577] Response Headers:
	I0604 23:15:35.678460    6196 round_trippers.go:580]     Audit-Id: e12f0207-4c05-4977-a5ec-4cd68a322b4a
	I0604 23:15:35.682685    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:15:35.682685    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:15:35.682685    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:15:35.682685    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:15:35.682685    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:15:35 GMT
	I0604 23:15:35.682971    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000","uid":"e6c7d6d5-31ef-4aae-82b1-0d1130b29243","resourceVersion":"344","creationTimestamp":"2024-06-04T23:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_04T23_15_12_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-04T23:15:07Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0604 23:15:36.173350    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000
	I0604 23:15:36.173635    6196 round_trippers.go:469] Request Headers:
	I0604 23:15:36.173694    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:15:36.173694    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:15:36.173918    6196 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0604 23:15:36.173918    6196 round_trippers.go:577] Response Headers:
	I0604 23:15:36.173918    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:15:36.173918    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:15:36.173918    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:15:36.173918    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:15:36 GMT
	I0604 23:15:36.173918    6196 round_trippers.go:580]     Audit-Id: 447db061-39be-49da-be1f-ce2fc44cfa8f
	I0604 23:15:36.173918    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:15:36.179040    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000","uid":"e6c7d6d5-31ef-4aae-82b1-0d1130b29243","resourceVersion":"344","creationTimestamp":"2024-06-04T23:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_04T23_15_12_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-04T23:15:07Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0604 23:15:36.667591    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000
	I0604 23:15:36.667871    6196 round_trippers.go:469] Request Headers:
	I0604 23:15:36.667871    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:15:36.667871    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:15:36.668652    6196 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0604 23:15:36.671337    6196 round_trippers.go:577] Response Headers:
	I0604 23:15:36.671337    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:15:36 GMT
	I0604 23:15:36.671337    6196 round_trippers.go:580]     Audit-Id: 0e8644cf-99bd-41fe-8558-fb4d4a616ba9
	I0604 23:15:36.671337    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:15:36.671402    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:15:36.671402    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:15:36.671402    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:15:36.671402    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000","uid":"e6c7d6d5-31ef-4aae-82b1-0d1130b29243","resourceVersion":"344","creationTimestamp":"2024-06-04T23:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_04T23_15_12_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-04T23:15:07Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0604 23:15:37.169282    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000
	I0604 23:15:37.169510    6196 round_trippers.go:469] Request Headers:
	I0604 23:15:37.169618    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:15:37.169618    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:15:37.175953    6196 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0604 23:15:37.176548    6196 round_trippers.go:577] Response Headers:
	I0604 23:15:37.176548    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:15:37.176548    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:15:37.176548    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:15:37.176548    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:15:37 GMT
	I0604 23:15:37.176548    6196 round_trippers.go:580]     Audit-Id: 60ea7881-6643-4f93-b578-92563307f922
	I0604 23:15:37.176548    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:15:37.176803    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000","uid":"e6c7d6d5-31ef-4aae-82b1-0d1130b29243","resourceVersion":"344","creationTimestamp":"2024-06-04T23:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_04T23_15_12_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-04T23:15:07Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0604 23:15:37.176803    6196 node_ready.go:53] node "multinode-022000" has status "Ready":"False"
	I0604 23:15:37.667649    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000
	I0604 23:15:37.667902    6196 round_trippers.go:469] Request Headers:
	I0604 23:15:37.667902    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:15:37.667902    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:15:37.668745    6196 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0604 23:15:37.672699    6196 round_trippers.go:577] Response Headers:
	I0604 23:15:37.672699    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:15:37 GMT
	I0604 23:15:37.672699    6196 round_trippers.go:580]     Audit-Id: 00970b83-42c6-462c-8e52-8e2fe2f11f93
	I0604 23:15:37.672818    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:15:37.672818    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:15:37.672818    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:15:37.672818    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:15:37.673116    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000","uid":"e6c7d6d5-31ef-4aae-82b1-0d1130b29243","resourceVersion":"344","creationTimestamp":"2024-06-04T23:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_04T23_15_12_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-04T23:15:07Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0604 23:15:38.170362    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000
	I0604 23:15:38.170614    6196 round_trippers.go:469] Request Headers:
	I0604 23:15:38.170614    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:15:38.170614    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:15:38.171327    6196 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0604 23:15:38.171327    6196 round_trippers.go:577] Response Headers:
	I0604 23:15:38.171327    6196 round_trippers.go:580]     Audit-Id: 63ac2993-0282-4679-aa80-4fbe9394f44b
	I0604 23:15:38.171327    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:15:38.171327    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:15:38.171327    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:15:38.171327    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:15:38.171327    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:15:38 GMT
	I0604 23:15:38.174749    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000","uid":"e6c7d6d5-31ef-4aae-82b1-0d1130b29243","resourceVersion":"344","creationTimestamp":"2024-06-04T23:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_04T23_15_12_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-04T23:15:07Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0604 23:15:38.679517    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000
	I0604 23:15:38.679517    6196 round_trippers.go:469] Request Headers:
	I0604 23:15:38.679517    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:15:38.679517    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:15:38.682341    6196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0604 23:15:38.688997    6196 round_trippers.go:577] Response Headers:
	I0604 23:15:38.688997    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:15:38.688997    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:15:38.688997    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:15:38.688997    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:15:38 GMT
	I0604 23:15:38.688997    6196 round_trippers.go:580]     Audit-Id: 896f0e2a-fc8f-4543-8d64-efdff3824406
	I0604 23:15:38.688997    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:15:38.692052    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000","uid":"e6c7d6d5-31ef-4aae-82b1-0d1130b29243","resourceVersion":"404","creationTimestamp":"2024-06-04T23:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_04T23_15_12_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-04T23:15:07Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0604 23:15:38.693356    6196 node_ready.go:49] node "multinode-022000" has status "Ready":"True"
	I0604 23:15:38.694027    6196 node_ready.go:38] duration metric: took 13.028057s for node "multinode-022000" to be "Ready" ...
	I0604 23:15:38.694027    6196 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0604 23:15:38.694027    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/namespaces/kube-system/pods
	I0604 23:15:38.694027    6196 round_trippers.go:469] Request Headers:
	I0604 23:15:38.694027    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:15:38.694027    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:15:38.706648    6196 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0604 23:15:38.706721    6196 round_trippers.go:577] Response Headers:
	I0604 23:15:38.706721    6196 round_trippers.go:580]     Audit-Id: 27217599-9af4-4c38-9e37-414a11907a0a
	I0604 23:15:38.706721    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:15:38.706721    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:15:38.706721    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:15:38.706721    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:15:38.706721    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:15:38 GMT
	I0604 23:15:38.707615    6196 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"407"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-mlh9s","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"15497b54-7964-47a8-9dc8-89c225f6b842","resourceVersion":"405","creationTimestamp":"2024-06-04T23:15:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"35e6f047-84cd-4ebd-aa42-f4810a209d30","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-04T23:15:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"35e6f047-84cd-4ebd-aa42-f4810a209d30\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 52787 chars]
	I0604 23:15:38.711929    6196 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-mlh9s" in "kube-system" namespace to be "Ready" ...
	I0604 23:15:38.712520    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mlh9s
	I0604 23:15:38.712520    6196 round_trippers.go:469] Request Headers:
	I0604 23:15:38.712520    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:15:38.712624    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:15:38.720639    6196 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0604 23:15:38.721037    6196 round_trippers.go:577] Response Headers:
	I0604 23:15:38.721037    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:15:38.721037    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:15:38.721037    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:15:38.721037    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:15:38 GMT
	I0604 23:15:38.721037    6196 round_trippers.go:580]     Audit-Id: 16530089-405c-4389-8b97-02b10335a3a3
	I0604 23:15:38.721037    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:15:38.721037    6196 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-mlh9s","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"15497b54-7964-47a8-9dc8-89c225f6b842","resourceVersion":"408","creationTimestamp":"2024-06-04T23:15:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"35e6f047-84cd-4ebd-aa42-f4810a209d30","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-04T23:15:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"35e6f047-84cd-4ebd-aa42-f4810a209d30\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6447 chars]
	I0604 23:15:38.722388    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000
	I0604 23:15:38.722438    6196 round_trippers.go:469] Request Headers:
	I0604 23:15:38.722487    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:15:38.722558    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:15:38.733579    6196 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0604 23:15:38.733579    6196 round_trippers.go:577] Response Headers:
	I0604 23:15:38.733793    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:15:38.733793    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:15:38.733793    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:15:38 GMT
	I0604 23:15:38.733793    6196 round_trippers.go:580]     Audit-Id: 98c01cbe-1d14-4b15-ae27-6dacd6a71484
	I0604 23:15:38.733793    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:15:38.733793    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:15:38.734153    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000","uid":"e6c7d6d5-31ef-4aae-82b1-0d1130b29243","resourceVersion":"404","creationTimestamp":"2024-06-04T23:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_04T23_15_12_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-04T23:15:07Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0604 23:15:39.220868    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mlh9s
	I0604 23:15:39.220868    6196 round_trippers.go:469] Request Headers:
	I0604 23:15:39.220868    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:15:39.220868    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:15:39.221457    6196 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0604 23:15:39.225296    6196 round_trippers.go:577] Response Headers:
	I0604 23:15:39.225296    6196 round_trippers.go:580]     Audit-Id: d04e6ac8-76ea-4530-8ba4-fb317ffd1f9e
	I0604 23:15:39.225296    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:15:39.225296    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:15:39.225296    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:15:39.225296    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:15:39.225296    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:15:39 GMT
	I0604 23:15:39.225535    6196 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-mlh9s","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"15497b54-7964-47a8-9dc8-89c225f6b842","resourceVersion":"408","creationTimestamp":"2024-06-04T23:15:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"35e6f047-84cd-4ebd-aa42-f4810a209d30","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-04T23:15:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"35e6f047-84cd-4ebd-aa42-f4810a209d30\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6447 chars]
	I0604 23:15:39.227043    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000
	I0604 23:15:39.227178    6196 round_trippers.go:469] Request Headers:
	I0604 23:15:39.227178    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:15:39.227178    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:15:39.229494    6196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0604 23:15:39.229494    6196 round_trippers.go:577] Response Headers:
	I0604 23:15:39.229494    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:15:39.229494    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:15:39.229494    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:15:39.229494    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:15:39 GMT
	I0604 23:15:39.229999    6196 round_trippers.go:580]     Audit-Id: 172b72b1-bd20-4aa6-990e-0d2d3f550654
	I0604 23:15:39.229999    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:15:39.230089    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000","uid":"e6c7d6d5-31ef-4aae-82b1-0d1130b29243","resourceVersion":"404","creationTimestamp":"2024-06-04T23:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_04T23_15_12_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-04T23:15:07Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0604 23:15:39.713807    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mlh9s
	I0604 23:15:39.713881    6196 round_trippers.go:469] Request Headers:
	I0604 23:15:39.713881    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:15:39.713913    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:15:39.714735    6196 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0604 23:15:39.714735    6196 round_trippers.go:577] Response Headers:
	I0604 23:15:39.714735    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:15:39.714735    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:15:39.714735    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:15:39 GMT
	I0604 23:15:39.714735    6196 round_trippers.go:580]     Audit-Id: 7989a4d1-8eef-4294-befc-640c8f4179da
	I0604 23:15:39.714735    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:15:39.714735    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:15:39.714735    6196 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-mlh9s","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"15497b54-7964-47a8-9dc8-89c225f6b842","resourceVersion":"408","creationTimestamp":"2024-06-04T23:15:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"35e6f047-84cd-4ebd-aa42-f4810a209d30","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-04T23:15:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"35e6f047-84cd-4ebd-aa42-f4810a209d30\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6447 chars]
	I0604 23:15:39.718640    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000
	I0604 23:15:39.718770    6196 round_trippers.go:469] Request Headers:
	I0604 23:15:39.718770    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:15:39.718770    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:15:39.718966    6196 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0604 23:15:39.718966    6196 round_trippers.go:577] Response Headers:
	I0604 23:15:39.718966    6196 round_trippers.go:580]     Audit-Id: 4a8e6722-b199-4de9-99d0-c8bbbff9564a
	I0604 23:15:39.718966    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:15:39.718966    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:15:39.718966    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:15:39.718966    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:15:39.718966    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:15:39 GMT
	I0604 23:15:39.721515    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000","uid":"e6c7d6d5-31ef-4aae-82b1-0d1130b29243","resourceVersion":"404","creationTimestamp":"2024-06-04T23:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_04T23_15_12_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-04T23:15:07Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0604 23:15:40.219855    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mlh9s
	I0604 23:15:40.219971    6196 round_trippers.go:469] Request Headers:
	I0604 23:15:40.219971    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:15:40.219971    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:15:40.227769    6196 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0604 23:15:40.227769    6196 round_trippers.go:577] Response Headers:
	I0604 23:15:40.227769    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:15:40.227769    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:15:40.227769    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:15:40.227769    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:15:40 GMT
	I0604 23:15:40.227769    6196 round_trippers.go:580]     Audit-Id: 5cad6698-d572-4c20-a07b-3ba9567951a1
	I0604 23:15:40.227769    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:15:40.227769    6196 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-mlh9s","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"15497b54-7964-47a8-9dc8-89c225f6b842","resourceVersion":"408","creationTimestamp":"2024-06-04T23:15:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"35e6f047-84cd-4ebd-aa42-f4810a209d30","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-04T23:15:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"35e6f047-84cd-4ebd-aa42-f4810a209d30\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6447 chars]
	I0604 23:15:40.228646    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000
	I0604 23:15:40.228776    6196 round_trippers.go:469] Request Headers:
	I0604 23:15:40.228776    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:15:40.228776    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:15:40.231480    6196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0604 23:15:40.231480    6196 round_trippers.go:577] Response Headers:
	I0604 23:15:40.231480    6196 round_trippers.go:580]     Audit-Id: 89624d5e-56f4-4ddc-b039-b89e88d82e48
	I0604 23:15:40.231480    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:15:40.231480    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:15:40.231480    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:15:40.231480    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:15:40.231480    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:15:40 GMT
	I0604 23:15:40.239198    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000","uid":"e6c7d6d5-31ef-4aae-82b1-0d1130b29243","resourceVersion":"404","creationTimestamp":"2024-06-04T23:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_04T23_15_12_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-04T23:15:07Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0604 23:15:40.729420    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mlh9s
	I0604 23:15:40.729420    6196 round_trippers.go:469] Request Headers:
	I0604 23:15:40.729420    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:15:40.729420    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:15:40.731184    6196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0604 23:15:40.731184    6196 round_trippers.go:577] Response Headers:
	I0604 23:15:40.731184    6196 round_trippers.go:580]     Audit-Id: 00a096e0-9a0e-4cfa-9931-e153adc6dbb2
	I0604 23:15:40.736356    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:15:40.736356    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:15:40.736356    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:15:40.736356    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:15:40.736356    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:15:40 GMT
	I0604 23:15:40.736600    6196 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-mlh9s","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"15497b54-7964-47a8-9dc8-89c225f6b842","resourceVersion":"408","creationTimestamp":"2024-06-04T23:15:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"35e6f047-84cd-4ebd-aa42-f4810a209d30","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-04T23:15:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"35e6f047-84cd-4ebd-aa42-f4810a209d30\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6447 chars]
	I0604 23:15:40.737408    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000
	I0604 23:15:40.737408    6196 round_trippers.go:469] Request Headers:
	I0604 23:15:40.737408    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:15:40.737408    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:15:40.741127    6196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0604 23:15:40.745931    6196 round_trippers.go:577] Response Headers:
	I0604 23:15:40.745931    6196 round_trippers.go:580]     Audit-Id: 047e5035-9de6-46ec-9816-2ae223985e89
	I0604 23:15:40.745931    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:15:40.746038    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:15:40.746038    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:15:40.746038    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:15:40.746038    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:15:40 GMT
	I0604 23:15:40.746284    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000","uid":"e6c7d6d5-31ef-4aae-82b1-0d1130b29243","resourceVersion":"404","creationTimestamp":"2024-06-04T23:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_04T23_15_12_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-04T23:15:07Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0604 23:15:40.746462    6196 pod_ready.go:102] pod "coredns-7db6d8ff4d-mlh9s" in "kube-system" namespace has status "Ready":"False"
	I0604 23:15:41.225204    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mlh9s
	I0604 23:15:41.225204    6196 round_trippers.go:469] Request Headers:
	I0604 23:15:41.225204    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:15:41.225280    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:15:41.225519    6196 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0604 23:15:41.229758    6196 round_trippers.go:577] Response Headers:
	I0604 23:15:41.229758    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:15:41.229758    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:15:41.229758    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:15:41.229758    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:15:41 GMT
	I0604 23:15:41.229758    6196 round_trippers.go:580]     Audit-Id: 83c3e1d0-f6e3-4ee7-83fb-87a0cf8db857
	I0604 23:15:41.229758    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:15:41.229758    6196 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-mlh9s","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"15497b54-7964-47a8-9dc8-89c225f6b842","resourceVersion":"421","creationTimestamp":"2024-06-04T23:15:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"35e6f047-84cd-4ebd-aa42-f4810a209d30","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-04T23:15:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"35e6f047-84cd-4ebd-aa42-f4810a209d30\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6578 chars]
	I0604 23:15:41.230829    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000
	I0604 23:15:41.230829    6196 round_trippers.go:469] Request Headers:
	I0604 23:15:41.230905    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:15:41.230905    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:15:41.231083    6196 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0604 23:15:41.231083    6196 round_trippers.go:577] Response Headers:
	I0604 23:15:41.231083    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:15:41 GMT
	I0604 23:15:41.231083    6196 round_trippers.go:580]     Audit-Id: 6fb7d752-3294-447d-8c70-a37653a7a3a3
	I0604 23:15:41.231083    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:15:41.231083    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:15:41.231083    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:15:41.231083    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:15:41.234025    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000","uid":"e6c7d6d5-31ef-4aae-82b1-0d1130b29243","resourceVersion":"404","creationTimestamp":"2024-06-04T23:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_04T23_15_12_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-04T23:15:07Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0604 23:15:41.234149    6196 pod_ready.go:92] pod "coredns-7db6d8ff4d-mlh9s" in "kube-system" namespace has status "Ready":"True"
	I0604 23:15:41.234149    6196 pod_ready.go:81] duration metric: took 2.5216677s for pod "coredns-7db6d8ff4d-mlh9s" in "kube-system" namespace to be "Ready" ...
	I0604 23:15:41.234149    6196 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-022000" in "kube-system" namespace to be "Ready" ...
	I0604 23:15:41.234690    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-022000
	I0604 23:15:41.234690    6196 round_trippers.go:469] Request Headers:
	I0604 23:15:41.234690    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:15:41.234690    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:15:41.234958    6196 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0604 23:15:41.234958    6196 round_trippers.go:577] Response Headers:
	I0604 23:15:41.234958    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:15:41.234958    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:15:41.234958    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:15:41.234958    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:15:41 GMT
	I0604 23:15:41.234958    6196 round_trippers.go:580]     Audit-Id: 477a939d-f5b8-481d-afe9-605ae0f3ce81
	I0604 23:15:41.234958    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:15:41.238207    6196 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-022000","namespace":"kube-system","uid":"cf5ce7db-ab12-4be8-9e44-317caab1adeb","resourceVersion":"386","creationTimestamp":"2024-06-04T23:15:11Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.20.128.97:2379","kubernetes.io/config.hash":"062055fff54be1dfa52344fae14a29a3","kubernetes.io/config.mirror":"062055fff54be1dfa52344fae14a29a3","kubernetes.io/config.seen":"2024-06-04T23:15:11.311330236Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-022000","uid":"e6c7d6d5-31ef-4aae-82b1-0d1130b29243","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-04T23:15:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6159 chars]
	I0604 23:15:41.239163    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000
	I0604 23:15:41.239163    6196 round_trippers.go:469] Request Headers:
	I0604 23:15:41.239163    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:15:41.239163    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:15:41.239966    6196 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0604 23:15:41.239966    6196 round_trippers.go:577] Response Headers:
	I0604 23:15:41.239966    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:15:41.242330    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:15:41 GMT
	I0604 23:15:41.242330    6196 round_trippers.go:580]     Audit-Id: be82e54d-8305-49e7-9403-e76c8df0e4eb
	I0604 23:15:41.242330    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:15:41.242330    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:15:41.242330    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:15:41.242330    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000","uid":"e6c7d6d5-31ef-4aae-82b1-0d1130b29243","resourceVersion":"404","creationTimestamp":"2024-06-04T23:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_04T23_15_12_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-04T23:15:07Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0604 23:15:41.242330    6196 pod_ready.go:92] pod "etcd-multinode-022000" in "kube-system" namespace has status "Ready":"True"
	I0604 23:15:41.242330    6196 pod_ready.go:81] duration metric: took 8.1808ms for pod "etcd-multinode-022000" in "kube-system" namespace to be "Ready" ...
	I0604 23:15:41.242330    6196 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-022000" in "kube-system" namespace to be "Ready" ...
	I0604 23:15:41.243068    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-022000
	I0604 23:15:41.243127    6196 round_trippers.go:469] Request Headers:
	I0604 23:15:41.243170    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:15:41.243192    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:15:41.244515    6196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0604 23:15:41.244515    6196 round_trippers.go:577] Response Headers:
	I0604 23:15:41.244515    6196 round_trippers.go:580]     Audit-Id: 3ec69af6-eb51-4b16-b87c-315e3f3911cd
	I0604 23:15:41.244515    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:15:41.246298    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:15:41.246298    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:15:41.246298    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:15:41.246298    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:15:41 GMT
	I0604 23:15:41.246565    6196 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-022000","namespace":"kube-system","uid":"a15ca283-cf36-4ce5-846a-37257524e217","resourceVersion":"385","creationTimestamp":"2024-06-04T23:15:10Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.20.128.97:8443","kubernetes.io/config.hash":"9ba2e7a4236a9c9b06cf265710457805","kubernetes.io/config.mirror":"9ba2e7a4236a9c9b06cf265710457805","kubernetes.io/config.seen":"2024-06-04T23:15:02.371587958Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-022000","uid":"e6c7d6d5-31ef-4aae-82b1-0d1130b29243","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-04T23:15:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7694 chars]
	I0604 23:15:41.247272    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000
	I0604 23:15:41.247301    6196 round_trippers.go:469] Request Headers:
	I0604 23:15:41.247353    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:15:41.247353    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:15:41.251275    6196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0604 23:15:41.251275    6196 round_trippers.go:577] Response Headers:
	I0604 23:15:41.251275    6196 round_trippers.go:580]     Audit-Id: 5d57f454-e8f6-48e8-a938-68f5f87173c6
	I0604 23:15:41.251275    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:15:41.251275    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:15:41.251275    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:15:41.251275    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:15:41.251275    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:15:41 GMT
	I0604 23:15:41.251275    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000","uid":"e6c7d6d5-31ef-4aae-82b1-0d1130b29243","resourceVersion":"404","creationTimestamp":"2024-06-04T23:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_04T23_15_12_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-04T23:15:07Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0604 23:15:41.251917    6196 pod_ready.go:92] pod "kube-apiserver-multinode-022000" in "kube-system" namespace has status "Ready":"True"
	I0604 23:15:41.251917    6196 pod_ready.go:81] duration metric: took 9.5877ms for pod "kube-apiserver-multinode-022000" in "kube-system" namespace to be "Ready" ...
	I0604 23:15:41.251917    6196 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-022000" in "kube-system" namespace to be "Ready" ...
	I0604 23:15:41.251917    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-022000
	I0604 23:15:41.251917    6196 round_trippers.go:469] Request Headers:
	I0604 23:15:41.251917    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:15:41.251917    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:15:41.257211    6196 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0604 23:15:41.257211    6196 round_trippers.go:577] Response Headers:
	I0604 23:15:41.257211    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:15:41.257211    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:15:41.257211    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:15:41.257211    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:15:41.257211    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:15:41 GMT
	I0604 23:15:41.257211    6196 round_trippers.go:580]     Audit-Id: be914605-3423-4f1c-8bb8-42e72021db83
	I0604 23:15:41.257211    6196 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-022000","namespace":"kube-system","uid":"2bb46405-19fa-4ca8-afd5-6d6224271444","resourceVersion":"382","creationTimestamp":"2024-06-04T23:15:11Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"84c4d645ecc2a919c3d46a8ee859a4e7","kubernetes.io/config.mirror":"84c4d645ecc2a919c3d46a8ee859a4e7","kubernetes.io/config.seen":"2024-06-04T23:15:11.311327436Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-022000","uid":"e6c7d6d5-31ef-4aae-82b1-0d1130b29243","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-04T23:15:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7264 chars]
	I0604 23:15:41.257944    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000
	I0604 23:15:41.257944    6196 round_trippers.go:469] Request Headers:
	I0604 23:15:41.257944    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:15:41.257944    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:15:41.259220    6196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0604 23:15:41.259220    6196 round_trippers.go:577] Response Headers:
	I0604 23:15:41.259220    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:15:41.259220    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:15:41 GMT
	I0604 23:15:41.259220    6196 round_trippers.go:580]     Audit-Id: 7cf6a7e2-64cd-4df7-a067-ceac68abb607
	I0604 23:15:41.259220    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:15:41.259220    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:15:41.259220    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:15:41.259220    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000","uid":"e6c7d6d5-31ef-4aae-82b1-0d1130b29243","resourceVersion":"404","creationTimestamp":"2024-06-04T23:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_04T23_15_12_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-04T23:15:07Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0604 23:15:41.261197    6196 pod_ready.go:92] pod "kube-controller-manager-multinode-022000" in "kube-system" namespace has status "Ready":"True"
	I0604 23:15:41.261235    6196 pod_ready.go:81] duration metric: took 9.3174ms for pod "kube-controller-manager-multinode-022000" in "kube-system" namespace to be "Ready" ...
	I0604 23:15:41.261277    6196 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pbmpr" in "kube-system" namespace to be "Ready" ...
	I0604 23:15:41.261380    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pbmpr
	I0604 23:15:41.261418    6196 round_trippers.go:469] Request Headers:
	I0604 23:15:41.261418    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:15:41.261462    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:15:41.263854    6196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0604 23:15:41.263854    6196 round_trippers.go:577] Response Headers:
	I0604 23:15:41.263854    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:15:41.263854    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:15:41.263854    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:15:41.263854    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:15:41 GMT
	I0604 23:15:41.263854    6196 round_trippers.go:580]     Audit-Id: b958e67e-21f2-47f5-8372-987853ff9a10
	I0604 23:15:41.263854    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:15:41.263854    6196 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-pbmpr","generateName":"kube-proxy-","namespace":"kube-system","uid":"ab42abeb-7ba9-4571-8c49-7c7f1e4bb6be","resourceVersion":"378","creationTimestamp":"2024-06-04T23:15:24Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"65a2d176-a8e8-492d-972b-d687ffc57c3d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-04T23:15:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65a2d176-a8e8-492d-972b-d687ffc57c3d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5828 chars]
	I0604 23:15:41.265404    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000
	I0604 23:15:41.265614    6196 round_trippers.go:469] Request Headers:
	I0604 23:15:41.265614    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:15:41.265614    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:15:41.265923    6196 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0604 23:15:41.265923    6196 round_trippers.go:577] Response Headers:
	I0604 23:15:41.265923    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:15:41.265923    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:15:41.268657    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:15:41.268657    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:15:41 GMT
	I0604 23:15:41.268657    6196 round_trippers.go:580]     Audit-Id: e0f5f02b-1813-482b-9efc-d9f8df0e9e26
	I0604 23:15:41.268657    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:15:41.269068    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000","uid":"e6c7d6d5-31ef-4aae-82b1-0d1130b29243","resourceVersion":"404","creationTimestamp":"2024-06-04T23:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_04T23_15_12_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-04T23:15:07Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0604 23:15:41.269358    6196 pod_ready.go:92] pod "kube-proxy-pbmpr" in "kube-system" namespace has status "Ready":"True"
	I0604 23:15:41.269358    6196 pod_ready.go:81] duration metric: took 8.0807ms for pod "kube-proxy-pbmpr" in "kube-system" namespace to be "Ready" ...
	I0604 23:15:41.269358    6196 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-022000" in "kube-system" namespace to be "Ready" ...
	I0604 23:15:41.430467    6196 request.go:629] Waited for 160.6738ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.128.97:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-022000
	I0604 23:15:41.430693    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-022000
	I0604 23:15:41.430693    6196 round_trippers.go:469] Request Headers:
	I0604 23:15:41.430769    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:15:41.430769    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:15:41.431075    6196 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0604 23:15:41.431075    6196 round_trippers.go:577] Response Headers:
	I0604 23:15:41.431075    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:15:41 GMT
	I0604 23:15:41.431075    6196 round_trippers.go:580]     Audit-Id: f650af1a-85cb-41b1-be2c-28c816fc42c9
	I0604 23:15:41.434966    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:15:41.434966    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:15:41.434966    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:15:41.434966    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:15:41.435345    6196 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-022000","namespace":"kube-system","uid":"0453fac4-fec2-4a1f-80f7-c3192dae4ea5","resourceVersion":"384","creationTimestamp":"2024-06-04T23:15:11Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7091343253039c34aad74dccf8d697b0","kubernetes.io/config.mirror":"7091343253039c34aad74dccf8d697b0","kubernetes.io/config.seen":"2024-06-04T23:15:11.311328836Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-022000","uid":"e6c7d6d5-31ef-4aae-82b1-0d1130b29243","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-04T23:15:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4994 chars]
	I0604 23:15:41.636019    6196 request.go:629] Waited for 199.2011ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.128.97:8443/api/v1/nodes/multinode-022000
	I0604 23:15:41.636019    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000
	I0604 23:15:41.636019    6196 round_trippers.go:469] Request Headers:
	I0604 23:15:41.636019    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:15:41.636019    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:15:41.636583    6196 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0604 23:15:41.636583    6196 round_trippers.go:577] Response Headers:
	I0604 23:15:41.636583    6196 round_trippers.go:580]     Audit-Id: 97a410fb-14da-4fd3-8b67-06b4c82c6da9
	I0604 23:15:41.636583    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:15:41.636583    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:15:41.636583    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:15:41.640353    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:15:41.640353    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:15:41 GMT
	I0604 23:15:41.640500    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000","uid":"e6c7d6d5-31ef-4aae-82b1-0d1130b29243","resourceVersion":"404","creationTimestamp":"2024-06-04T23:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_04T23_15_12_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-04T23:15:07Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0604 23:15:41.640949    6196 pod_ready.go:92] pod "kube-scheduler-multinode-022000" in "kube-system" namespace has status "Ready":"True"
	I0604 23:15:41.640949    6196 pod_ready.go:81] duration metric: took 371.5883ms for pod "kube-scheduler-multinode-022000" in "kube-system" namespace to be "Ready" ...
	I0604 23:15:41.640949    6196 pod_ready.go:38] duration metric: took 2.9468996s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0604 23:15:41.641107    6196 api_server.go:52] waiting for apiserver process to appear ...
	I0604 23:15:41.654079    6196 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0604 23:15:41.682999    6196 command_runner.go:130] > 2011
	I0604 23:15:41.682999    6196 api_server.go:72] duration metric: took 16.910428s to wait for apiserver process to appear ...
	I0604 23:15:41.682999    6196 api_server.go:88] waiting for apiserver healthz status ...
	I0604 23:15:41.683099    6196 api_server.go:253] Checking apiserver healthz at https://172.20.128.97:8443/healthz ...
	I0604 23:15:41.689749    6196 api_server.go:279] https://172.20.128.97:8443/healthz returned 200:
	ok
	I0604 23:15:41.690816    6196 round_trippers.go:463] GET https://172.20.128.97:8443/version
	I0604 23:15:41.690816    6196 round_trippers.go:469] Request Headers:
	I0604 23:15:41.690816    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:15:41.690816    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:15:41.693807    6196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0604 23:15:41.693807    6196 round_trippers.go:577] Response Headers:
	I0604 23:15:41.693807    6196 round_trippers.go:580]     Content-Length: 263
	I0604 23:15:41.693807    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:15:41 GMT
	I0604 23:15:41.693807    6196 round_trippers.go:580]     Audit-Id: c4f65590-ad40-49eb-9d5c-d075d8a9623e
	I0604 23:15:41.693807    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:15:41.693807    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:15:41.693807    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:15:41.693807    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:15:41.693807    6196 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.1",
	  "gitCommit": "6911225c3f747e1cd9d109c305436d08b668f086",
	  "gitTreeState": "clean",
	  "buildDate": "2024-05-14T10:42:02Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0604 23:15:41.693807    6196 api_server.go:141] control plane version: v1.30.1
	I0604 23:15:41.694338    6196 api_server.go:131] duration metric: took 11.3391ms to wait for apiserver health ...
	I0604 23:15:41.694338    6196 system_pods.go:43] waiting for kube-system pods to appear ...
	I0604 23:15:41.825588    6196 request.go:629] Waited for 130.8592ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.128.97:8443/api/v1/namespaces/kube-system/pods
	I0604 23:15:41.825588    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/namespaces/kube-system/pods
	I0604 23:15:41.825588    6196 round_trippers.go:469] Request Headers:
	I0604 23:15:41.825588    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:15:41.825588    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:15:41.831583    6196 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0604 23:15:41.831626    6196 round_trippers.go:577] Response Headers:
	I0604 23:15:41.831626    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:15:41.831626    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:15:41.831626    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:15:41.831626    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:15:41.831626    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:15:41 GMT
	I0604 23:15:41.831626    6196 round_trippers.go:580]     Audit-Id: a1dd8dad-c761-4f40-9c96-59a65ba9a574
	I0604 23:15:41.832904    6196 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"425"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-mlh9s","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"15497b54-7964-47a8-9dc8-89c225f6b842","resourceVersion":"421","creationTimestamp":"2024-06-04T23:15:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"35e6f047-84cd-4ebd-aa42-f4810a209d30","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-04T23:15:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"35e6f047-84cd-4ebd-aa42-f4810a209d30\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56451 chars]
	I0604 23:15:41.835551    6196 system_pods.go:59] 8 kube-system pods found
	I0604 23:15:41.835551    6196 system_pods.go:61] "coredns-7db6d8ff4d-mlh9s" [15497b54-7964-47a8-9dc8-89c225f6b842] Running
	I0604 23:15:41.835551    6196 system_pods.go:61] "etcd-multinode-022000" [cf5ce7db-ab12-4be8-9e44-317caab1adeb] Running
	I0604 23:15:41.835551    6196 system_pods.go:61] "kindnet-s279j" [68ac1199-4b19-4f5d-99d5-701006fac840] Running
	I0604 23:15:41.835551    6196 system_pods.go:61] "kube-apiserver-multinode-022000" [a15ca283-cf36-4ce5-846a-37257524e217] Running
	I0604 23:15:41.835551    6196 system_pods.go:61] "kube-controller-manager-multinode-022000" [2bb46405-19fa-4ca8-afd5-6d6224271444] Running
	I0604 23:15:41.835551    6196 system_pods.go:61] "kube-proxy-pbmpr" [ab42abeb-7ba9-4571-8c49-7c7f1e4bb6be] Running
	I0604 23:15:41.835551    6196 system_pods.go:61] "kube-scheduler-multinode-022000" [0453fac4-fec2-4a1f-80f7-c3192dae4ea5] Running
	I0604 23:15:41.835551    6196 system_pods.go:61] "storage-provisioner" [b56880e3-c751-42af-b85d-0ce47f4415ee] Running
	I0604 23:15:41.835551    6196 system_pods.go:74] duration metric: took 141.2122ms to wait for pod list to return data ...
	I0604 23:15:41.835551    6196 default_sa.go:34] waiting for default service account to be created ...
	I0604 23:15:42.028219    6196 request.go:629] Waited for 192.5469ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.128.97:8443/api/v1/namespaces/default/serviceaccounts
	I0604 23:15:42.028395    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/namespaces/default/serviceaccounts
	I0604 23:15:42.028395    6196 round_trippers.go:469] Request Headers:
	I0604 23:15:42.028395    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:15:42.028395    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:15:42.029214    6196 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0604 23:15:42.029214    6196 round_trippers.go:577] Response Headers:
	I0604 23:15:42.029214    6196 round_trippers.go:580]     Audit-Id: 56108434-aa1f-4b19-a5e9-7b19c021ae7b
	I0604 23:15:42.029214    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:15:42.029214    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:15:42.029214    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:15:42.029214    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:15:42.029214    6196 round_trippers.go:580]     Content-Length: 261
	I0604 23:15:42.029214    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:15:42 GMT
	I0604 23:15:42.031901    6196 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"425"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"6c5f1584-41ab-41e8-b2ea-c87ef904e212","resourceVersion":"319","creationTimestamp":"2024-06-04T23:15:24Z"}}]}
	I0604 23:15:42.031951    6196 default_sa.go:45] found service account: "default"
	I0604 23:15:42.031951    6196 default_sa.go:55] duration metric: took 196.3984ms for default service account to be created ...
	I0604 23:15:42.031951    6196 system_pods.go:116] waiting for k8s-apps to be running ...
	I0604 23:15:42.248682    6196 request.go:629] Waited for 216.5522ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.128.97:8443/api/v1/namespaces/kube-system/pods
	I0604 23:15:42.248682    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/namespaces/kube-system/pods
	I0604 23:15:42.248902    6196 round_trippers.go:469] Request Headers:
	I0604 23:15:42.248902    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:15:42.248902    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:15:42.260942    6196 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0604 23:15:42.260942    6196 round_trippers.go:577] Response Headers:
	I0604 23:15:42.260942    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:15:42 GMT
	I0604 23:15:42.260942    6196 round_trippers.go:580]     Audit-Id: 4ec25d06-029e-405c-b93f-2477d94cadb9
	I0604 23:15:42.260942    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:15:42.260942    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:15:42.260942    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:15:42.260942    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:15:42.261540    6196 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"426"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-mlh9s","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"15497b54-7964-47a8-9dc8-89c225f6b842","resourceVersion":"421","creationTimestamp":"2024-06-04T23:15:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"35e6f047-84cd-4ebd-aa42-f4810a209d30","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-04T23:15:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"35e6f047-84cd-4ebd-aa42-f4810a209d30\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56451 chars]
	I0604 23:15:42.264497    6196 system_pods.go:86] 8 kube-system pods found
	I0604 23:15:42.264497    6196 system_pods.go:89] "coredns-7db6d8ff4d-mlh9s" [15497b54-7964-47a8-9dc8-89c225f6b842] Running
	I0604 23:15:42.264497    6196 system_pods.go:89] "etcd-multinode-022000" [cf5ce7db-ab12-4be8-9e44-317caab1adeb] Running
	I0604 23:15:42.264497    6196 system_pods.go:89] "kindnet-s279j" [68ac1199-4b19-4f5d-99d5-701006fac840] Running
	I0604 23:15:42.264497    6196 system_pods.go:89] "kube-apiserver-multinode-022000" [a15ca283-cf36-4ce5-846a-37257524e217] Running
	I0604 23:15:42.264497    6196 system_pods.go:89] "kube-controller-manager-multinode-022000" [2bb46405-19fa-4ca8-afd5-6d6224271444] Running
	I0604 23:15:42.264497    6196 system_pods.go:89] "kube-proxy-pbmpr" [ab42abeb-7ba9-4571-8c49-7c7f1e4bb6be] Running
	I0604 23:15:42.264497    6196 system_pods.go:89] "kube-scheduler-multinode-022000" [0453fac4-fec2-4a1f-80f7-c3192dae4ea5] Running
	I0604 23:15:42.264497    6196 system_pods.go:89] "storage-provisioner" [b56880e3-c751-42af-b85d-0ce47f4415ee] Running
	I0604 23:15:42.264497    6196 system_pods.go:126] duration metric: took 232.5444ms to wait for k8s-apps to be running ...
	I0604 23:15:42.264497    6196 system_svc.go:44] waiting for kubelet service to be running ....
	I0604 23:15:42.276665    6196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0604 23:15:42.304744    6196 system_svc.go:56] duration metric: took 40.1726ms WaitForService to wait for kubelet
	I0604 23:15:42.304744    6196 kubeadm.go:576] duration metric: took 17.532168s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0604 23:15:42.304744    6196 node_conditions.go:102] verifying NodePressure condition ...
	I0604 23:15:42.434079    6196 request.go:629] Waited for 129.3346ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.128.97:8443/api/v1/nodes
	I0604 23:15:42.434637    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes
	I0604 23:15:42.434637    6196 round_trippers.go:469] Request Headers:
	I0604 23:15:42.434710    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:15:42.434710    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:15:42.442663    6196 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0604 23:15:42.442710    6196 round_trippers.go:577] Response Headers:
	I0604 23:15:42.442710    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:15:42.442710    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:15:42 GMT
	I0604 23:15:42.442710    6196 round_trippers.go:580]     Audit-Id: 2b79b053-5719-4ee4-acfa-4dc4a6fbba03
	I0604 23:15:42.442710    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:15:42.442710    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:15:42.442710    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:15:42.442710    6196 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"428"},"items":[{"metadata":{"name":"multinode-022000","uid":"e6c7d6d5-31ef-4aae-82b1-0d1130b29243","resourceVersion":"427","creationTimestamp":"2024-06-04T23:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_04T23_15_12_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 5012 chars]
	I0604 23:15:42.443409    6196 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0604 23:15:42.443458    6196 node_conditions.go:123] node cpu capacity is 2
	I0604 23:15:42.443506    6196 node_conditions.go:105] duration metric: took 138.7615ms to run NodePressure ...
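The capacity figures logged just above ("17734596Ki" of ephemeral storage, cpu capacity 2) are Kubernetes resource quantities taken from the NodeList response. A minimal Go sketch, assuming only that k8s.io/apimachinery is on the module path, of parsing those two strings; nothing here is minikube's own code.

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	// Values copied from the node status logged above.
	storage := resource.MustParse("17734596Ki")
	cpu := resource.MustParse("2")

	fmt.Println(storage.Value(), "bytes of ephemeral storage") // 17734596 * 1024
	fmt.Println(cpu.Value(), "CPUs")
}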
	I0604 23:15:42.443506    6196 start.go:240] waiting for startup goroutines ...
	I0604 23:15:42.443506    6196 start.go:245] waiting for cluster config update ...
	I0604 23:15:42.443506    6196 start.go:254] writing updated cluster config ...
	I0604 23:15:42.449133    6196 out.go:177] 
	I0604 23:15:42.451975    6196 config.go:182] Loaded profile config "ha-609500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0604 23:15:42.457667    6196 config.go:182] Loaded profile config "multinode-022000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0604 23:15:42.457667    6196 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-022000\config.json ...
	I0604 23:15:42.468902    6196 out.go:177] * Starting "multinode-022000-m02" worker node in "multinode-022000" cluster
	I0604 23:15:42.471464    6196 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0604 23:15:42.471464    6196 cache.go:56] Caching tarball of preloaded images
	I0604 23:15:42.472148    6196 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0604 23:15:42.472148    6196 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0604 23:15:42.472148    6196 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-022000\config.json ...
	I0604 23:15:42.474631    6196 start.go:360] acquireMachinesLock for multinode-022000-m02: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0604 23:15:42.475221    6196 start.go:364] duration metric: took 589.1µs to acquireMachinesLock for "multinode-022000-m02"
	I0604 23:15:42.475332    6196 start.go:93] Provisioning new machine with config: &{Name:multinode-022000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19024/minikube-v1.33.1-1717518792-19024-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717518322-19024@sha256:d2210ba725128d67c6173c8b8d82d6c8736e8dad7a6c389a278f795205c6764f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-022000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.128.97 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0604 23:15:42.475332    6196 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0604 23:15:42.481198    6196 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0604 23:15:42.481313    6196 start.go:159] libmachine.API.Create for "multinode-022000" (driver="hyperv")
	I0604 23:15:42.481313    6196 client.go:168] LocalClient.Create starting
	I0604 23:15:42.481966    6196 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0604 23:15:42.482163    6196 main.go:141] libmachine: Decoding PEM data...
	I0604 23:15:42.482253    6196 main.go:141] libmachine: Parsing certificate...
	I0604 23:15:42.482417    6196 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0604 23:15:42.482654    6196 main.go:141] libmachine: Decoding PEM data...
	I0604 23:15:42.482733    6196 main.go:141] libmachine: Parsing certificate...
	I0604 23:15:42.482805    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0604 23:15:44.514720    6196 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0604 23:15:44.514720    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:15:44.514890    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0604 23:15:46.366356    6196 main.go:141] libmachine: [stdout =====>] : False
	
	I0604 23:15:46.366356    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:15:46.366356    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0604 23:15:47.979901    6196 main.go:141] libmachine: [stdout =====>] : True
	
	I0604 23:15:47.979901    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:15:47.979901    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0604 23:15:52.107727    6196 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0604 23:15:52.107727    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:15:52.121654    6196 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1717518792-19024-amd64.iso...
	I0604 23:15:52.647323    6196 main.go:141] libmachine: Creating SSH key...
	I0604 23:15:53.109523    6196 main.go:141] libmachine: Creating VM...
	I0604 23:15:53.109523    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0604 23:15:56.298140    6196 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0604 23:15:56.312029    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:15:56.312029    6196 main.go:141] libmachine: Using switch "Default Switch"
	I0604 23:15:56.312029    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0604 23:15:58.188643    6196 main.go:141] libmachine: [stdout =====>] : True
	
	I0604 23:15:58.188643    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:15:58.188643    6196 main.go:141] libmachine: Creating VHD
	I0604 23:15:58.188643    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-022000-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0604 23:16:02.204658    6196 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-022000-m02\fixed
	                          .vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 5C41BDB5-62B8-44D6-88EC-43151DCA7638
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0604 23:16:02.204658    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:16:02.204658    6196 main.go:141] libmachine: Writing magic tar header
	I0604 23:16:02.204951    6196 main.go:141] libmachine: Writing SSH key tar header
	I0604 23:16:02.205729    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-022000-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-022000-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0604 23:16:05.558106    6196 main.go:141] libmachine: [stdout =====>] : 
	I0604 23:16:05.558106    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:16:05.572818    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-022000-m02\disk.vhd' -SizeBytes 20000MB
	I0604 23:16:08.284641    6196 main.go:141] libmachine: [stdout =====>] : 
	I0604 23:16:08.284743    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:16:08.284743    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-022000-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-022000-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0604 23:16:12.197880    6196 main.go:141] libmachine: [stdout =====>] : 
	Name                 State CPUUsage(%!)(MISSING) MemoryAssigned(M) Uptime   Status             Version
	----                 ----- ----------- ----------------- ------   ------             -------
	multinode-022000-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0604 23:16:12.211692    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:16:12.211692    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-022000-m02 -DynamicMemoryEnabled $false
	I0604 23:16:14.661969    6196 main.go:141] libmachine: [stdout =====>] : 
	I0604 23:16:14.673712    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:16:14.673712    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-022000-m02 -Count 2
	I0604 23:16:17.033453    6196 main.go:141] libmachine: [stdout =====>] : 
	I0604 23:16:17.033453    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:16:17.047627    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-022000-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-022000-m02\boot2docker.iso'
	I0604 23:16:19.864987    6196 main.go:141] libmachine: [stdout =====>] : 
	I0604 23:16:19.864987    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:16:19.877331    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-022000-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-022000-m02\disk.vhd'
	I0604 23:16:22.769698    6196 main.go:141] libmachine: [stdout =====>] : 
	I0604 23:16:22.769698    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:16:22.769698    6196 main.go:141] libmachine: Starting VM...
	I0604 23:16:22.769698    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-022000-m02
	I0604 23:16:26.159678    6196 main.go:141] libmachine: [stdout =====>] : 
	I0604 23:16:26.159812    6196 main.go:141] libmachine: [stderr =====>] : 
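Every "[executing ==>]" entry above follows the same pattern: shell out to powershell.exe with -NoProfile -NonInteractive and capture stdout and stderr separately, which is what the paired "[stdout =====>]" / "[stderr =====>]" lines record. A minimal, hypothetical Go sketch of that pattern (the helper name runPowerShell is illustrative, not the actual minikube/libmachine API); the PowerShell path and the Get-VM query are taken from the log.

package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

// runPowerShell invokes one non-interactive PowerShell command and returns its output streams.
func runPowerShell(command string) (stdout, stderr string, err error) {
	var outBuf, errBuf bytes.Buffer
	cmd := exec.Command(
		`C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
		"-NoProfile", "-NonInteractive", command,
	)
	cmd.Stdout = &outBuf
	cmd.Stderr = &errBuf
	err = cmd.Run()
	return outBuf.String(), errBuf.String(), err
}

func main() {
	// The same state query the log repeats while waiting for the VM to come up.
	out, errOut, err := runPowerShell("( Hyper-V\\Get-VM multinode-022000-m02 ).state")
	if err != nil {
		fmt.Println("powershell failed:", err)
		return
	}
	fmt.Println("[stdout =====>] :", out)
	fmt.Println("[stderr =====>] :", errOut)
}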
	I0604 23:16:26.159812    6196 main.go:141] libmachine: Waiting for host to start...
	I0604 23:16:26.159812    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000-m02 ).state
	I0604 23:16:28.648730    6196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:16:28.649504    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:16:28.649597    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000-m02 ).networkadapters[0]).ipaddresses[0]
	I0604 23:16:31.408737    6196 main.go:141] libmachine: [stdout =====>] : 
	I0604 23:16:31.408737    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:16:32.413950    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000-m02 ).state
	I0604 23:16:34.836840    6196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:16:34.836840    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:16:34.849790    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000-m02 ).networkadapters[0]).ipaddresses[0]
	I0604 23:16:37.591191    6196 main.go:141] libmachine: [stdout =====>] : 
	I0604 23:16:37.591191    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:16:38.597239    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000-m02 ).state
	I0604 23:16:40.968166    6196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:16:40.981014    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:16:40.981086    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000-m02 ).networkadapters[0]).ipaddresses[0]
	I0604 23:16:43.768791    6196 main.go:141] libmachine: [stdout =====>] : 
	I0604 23:16:43.775649    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:16:44.777364    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000-m02 ).state
	I0604 23:16:47.155930    6196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:16:47.155930    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:16:47.155930    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000-m02 ).networkadapters[0]).ipaddresses[0]
	I0604 23:16:49.936966    6196 main.go:141] libmachine: [stdout =====>] : 
	I0604 23:16:49.936966    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:16:50.937853    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000-m02 ).state
	I0604 23:16:53.403925    6196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:16:53.409223    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:16:53.409297    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000-m02 ).networkadapters[0]).ipaddresses[0]
	I0604 23:16:56.223975    6196 main.go:141] libmachine: [stdout =====>] : 172.20.130.221
	
	I0604 23:16:56.236119    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:16:56.236119    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000-m02 ).state
	I0604 23:16:58.565169    6196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:16:58.565169    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:16:58.565169    6196 machine.go:94] provisionDockerMachine start ...
	I0604 23:16:58.572380    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000-m02 ).state
	I0604 23:17:00.877597    6196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:17:00.877597    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:17:00.890044    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000-m02 ).networkadapters[0]).ipaddresses[0]
	I0604 23:17:03.634831    6196 main.go:141] libmachine: [stdout =====>] : 172.20.130.221
	
	I0604 23:17:03.634831    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:17:03.653098    6196 main.go:141] libmachine: Using SSH client type: native
	I0604 23:17:03.654173    6196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf6a540] 0xf6d120 <nil>  [] 0s} 172.20.130.221 22 <nil> <nil>}
	I0604 23:17:03.654243    6196 main.go:141] libmachine: About to run SSH command:
	hostname
	I0604 23:17:03.779399    6196 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
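The "Using SSH client type: native" entries and the struct dump next to them correspond to a Go-native SSH client pointed at 172.20.130.221:22 as user docker with the generated id_rsa key. The following is a minimal sketch, assuming golang.org/x/crypto/ssh is available, of running a single command such as hostname over that connection; the address, user and key path are copied from the log, everything else is illustrative and not minikube's actual implementation.

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile(`C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-022000-m02\id_rsa`)
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // illustration only; pin host keys in real use
	}
	client, err := ssh.Dial("tcp", "172.20.130.221:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()

	out, err := session.CombinedOutput("hostname")
	if err != nil {
		panic(err)
	}
	fmt.Printf("SSH cmd output: %s", out)
}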
	
	I0604 23:17:03.779484    6196 buildroot.go:166] provisioning hostname "multinode-022000-m02"
	I0604 23:17:03.779565    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000-m02 ).state
	I0604 23:17:06.152838    6196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:17:06.152838    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:17:06.153007    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000-m02 ).networkadapters[0]).ipaddresses[0]
	I0604 23:17:08.976532    6196 main.go:141] libmachine: [stdout =====>] : 172.20.130.221
	
	I0604 23:17:08.976532    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:17:08.993610    6196 main.go:141] libmachine: Using SSH client type: native
	I0604 23:17:08.993728    6196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf6a540] 0xf6d120 <nil>  [] 0s} 172.20.130.221 22 <nil> <nil>}
	I0604 23:17:08.993728    6196 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-022000-m02 && echo "multinode-022000-m02" | sudo tee /etc/hostname
	I0604 23:17:09.151794    6196 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-022000-m02
	
	I0604 23:17:09.151794    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000-m02 ).state
	I0604 23:17:11.460477    6196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:17:11.473267    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:17:11.473267    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000-m02 ).networkadapters[0]).ipaddresses[0]
	I0604 23:17:14.247697    6196 main.go:141] libmachine: [stdout =====>] : 172.20.130.221
	
	I0604 23:17:14.247697    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:17:14.267409    6196 main.go:141] libmachine: Using SSH client type: native
	I0604 23:17:14.268065    6196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf6a540] 0xf6d120 <nil>  [] 0s} 172.20.130.221 22 <nil> <nil>}
	I0604 23:17:14.268065    6196 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-022000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-022000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-022000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0604 23:17:14.420587    6196 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0604 23:17:14.420649    6196 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0604 23:17:14.420649    6196 buildroot.go:174] setting up certificates
	I0604 23:17:14.420649    6196 provision.go:84] configureAuth start
	I0604 23:17:14.420747    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000-m02 ).state
	I0604 23:17:16.735257    6196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:17:16.735257    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:17:16.735257    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000-m02 ).networkadapters[0]).ipaddresses[0]
	I0604 23:17:19.459399    6196 main.go:141] libmachine: [stdout =====>] : 172.20.130.221
	
	I0604 23:17:19.459399    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:17:19.472388    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000-m02 ).state
	I0604 23:17:21.778086    6196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:17:21.778086    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:17:21.778086    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000-m02 ).networkadapters[0]).ipaddresses[0]
	I0604 23:17:24.587963    6196 main.go:141] libmachine: [stdout =====>] : 172.20.130.221
	
	I0604 23:17:24.587963    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:17:24.587963    6196 provision.go:143] copyHostCerts
	I0604 23:17:24.588863    6196 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0604 23:17:24.589122    6196 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0604 23:17:24.589259    6196 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0604 23:17:24.589783    6196 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0604 23:17:24.591006    6196 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0604 23:17:24.591006    6196 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0604 23:17:24.591006    6196 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0604 23:17:24.591641    6196 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0604 23:17:24.592249    6196 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0604 23:17:24.592777    6196 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0604 23:17:24.592777    6196 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0604 23:17:24.592848    6196 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0604 23:17:24.593990    6196 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-022000-m02 san=[127.0.0.1 172.20.130.221 localhost minikube multinode-022000-m02]
	I0604 23:17:25.113078    6196 provision.go:177] copyRemoteCerts
	I0604 23:17:25.129161    6196 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0604 23:17:25.129161    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000-m02 ).state
	I0604 23:17:27.535921    6196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:17:27.535921    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:17:27.535921    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000-m02 ).networkadapters[0]).ipaddresses[0]
	I0604 23:17:30.376005    6196 main.go:141] libmachine: [stdout =====>] : 172.20.130.221
	
	I0604 23:17:30.376005    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:17:30.390275    6196 sshutil.go:53] new ssh client: &{IP:172.20.130.221 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-022000-m02\id_rsa Username:docker}
	I0604 23:17:30.502796    6196 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.3735929s)
	I0604 23:17:30.502900    6196 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0604 23:17:30.503336    6196 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0604 23:17:30.556821    6196 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0604 23:17:30.556939    6196 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0604 23:17:30.609978    6196 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0604 23:17:30.610262    6196 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0604 23:17:30.661820    6196 provision.go:87] duration metric: took 16.2410475s to configureAuth
	I0604 23:17:30.661820    6196 buildroot.go:189] setting minikube options for container-runtime
	I0604 23:17:30.662598    6196 config.go:182] Loaded profile config "multinode-022000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0604 23:17:30.662598    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000-m02 ).state
	I0604 23:17:33.003698    6196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:17:33.003698    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:17:33.017851    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000-m02 ).networkadapters[0]).ipaddresses[0]
	I0604 23:17:35.851680    6196 main.go:141] libmachine: [stdout =====>] : 172.20.130.221
	
	I0604 23:17:35.851680    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:17:35.871644    6196 main.go:141] libmachine: Using SSH client type: native
	I0604 23:17:35.871950    6196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf6a540] 0xf6d120 <nil>  [] 0s} 172.20.130.221 22 <nil> <nil>}
	I0604 23:17:35.871950    6196 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0604 23:17:36.008288    6196 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0604 23:17:36.008397    6196 buildroot.go:70] root file system type: tmpfs
	I0604 23:17:36.008529    6196 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0604 23:17:36.008645    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000-m02 ).state
	I0604 23:17:38.329374    6196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:17:38.332024    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:17:38.332024    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000-m02 ).networkadapters[0]).ipaddresses[0]
	I0604 23:17:41.127630    6196 main.go:141] libmachine: [stdout =====>] : 172.20.130.221
	
	I0604 23:17:41.127630    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:17:41.134742    6196 main.go:141] libmachine: Using SSH client type: native
	I0604 23:17:41.134742    6196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf6a540] 0xf6d120 <nil>  [] 0s} 172.20.130.221 22 <nil> <nil>}
	I0604 23:17:41.135833    6196 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.20.128.97"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0604 23:17:41.299445    6196 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.20.128.97
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0604 23:17:41.299556    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000-m02 ).state
	I0604 23:17:43.635240    6196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:17:43.635240    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:17:43.647988    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000-m02 ).networkadapters[0]).ipaddresses[0]
	I0604 23:17:46.456880    6196 main.go:141] libmachine: [stdout =====>] : 172.20.130.221
	
	I0604 23:17:46.469870    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:17:46.476966    6196 main.go:141] libmachine: Using SSH client type: native
	I0604 23:17:46.477091    6196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf6a540] 0xf6d120 <nil>  [] 0s} 172.20.130.221 22 <nil> <nil>}
	I0604 23:17:46.477091    6196 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0604 23:17:48.661850    6196 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0604 23:17:48.661850    6196 machine.go:97] duration metric: took 50.0962966s to provisionDockerMachine
	I0604 23:17:48.661850    6196 client.go:171] duration metric: took 2m6.1795717s to LocalClient.Create
	I0604 23:17:48.661850    6196 start.go:167] duration metric: took 2m6.1795717s to libmachine.API.Create "multinode-022000"
	I0604 23:17:48.661850    6196 start.go:293] postStartSetup for "multinode-022000-m02" (driver="hyperv")
	I0604 23:17:48.661850    6196 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0604 23:17:48.677250    6196 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0604 23:17:48.677779    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000-m02 ).state
	I0604 23:17:51.031425    6196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:17:51.031425    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:17:51.031425    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000-m02 ).networkadapters[0]).ipaddresses[0]
	I0604 23:17:53.813598    6196 main.go:141] libmachine: [stdout =====>] : 172.20.130.221
	
	I0604 23:17:53.827297    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:17:53.827933    6196 sshutil.go:53] new ssh client: &{IP:172.20.130.221 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-022000-m02\id_rsa Username:docker}
	I0604 23:17:53.939077    6196 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.2611451s)
	I0604 23:17:53.960835    6196 ssh_runner.go:195] Run: cat /etc/os-release
	I0604 23:17:53.969287    6196 command_runner.go:130] > NAME=Buildroot
	I0604 23:17:53.969393    6196 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0604 23:17:53.969393    6196 command_runner.go:130] > ID=buildroot
	I0604 23:17:53.969393    6196 command_runner.go:130] > VERSION_ID=2023.02.9
	I0604 23:17:53.969495    6196 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0604 23:17:53.969552    6196 info.go:137] Remote host: Buildroot 2023.02.9
	I0604 23:17:53.969646    6196 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0604 23:17:53.969926    6196 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0604 23:17:53.970613    6196 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\140642.pem -> 140642.pem in /etc/ssl/certs
	I0604 23:17:53.970613    6196 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\140642.pem -> /etc/ssl/certs/140642.pem
	I0604 23:17:53.984188    6196 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0604 23:17:54.005090    6196 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\140642.pem --> /etc/ssl/certs/140642.pem (1708 bytes)
	I0604 23:17:54.054235    6196 start.go:296] duration metric: took 5.3923431s for postStartSetup
	I0604 23:17:54.056843    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000-m02 ).state
	I0604 23:17:56.396798    6196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:17:56.396798    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:17:56.396798    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000-m02 ).networkadapters[0]).ipaddresses[0]
	I0604 23:17:59.156279    6196 main.go:141] libmachine: [stdout =====>] : 172.20.130.221
	
	I0604 23:17:59.156279    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:17:59.169455    6196 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-022000\config.json ...
	I0604 23:17:59.171897    6196 start.go:128] duration metric: took 2m16.6955176s to createHost
	I0604 23:17:59.171969    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000-m02 ).state
	I0604 23:18:01.474791    6196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:18:01.485080    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:18:01.485175    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000-m02 ).networkadapters[0]).ipaddresses[0]
	I0604 23:18:04.286163    6196 main.go:141] libmachine: [stdout =====>] : 172.20.130.221
	
	I0604 23:18:04.297129    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:18:04.303171    6196 main.go:141] libmachine: Using SSH client type: native
	I0604 23:18:04.303446    6196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf6a540] 0xf6d120 <nil>  [] 0s} 172.20.130.221 22 <nil> <nil>}
	I0604 23:18:04.303446    6196 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0604 23:18:04.427048    6196 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717543084.434587741
	
	I0604 23:18:04.427048    6196 fix.go:216] guest clock: 1717543084.434587741
	I0604 23:18:04.427048    6196 fix.go:229] Guest: 2024-06-04 23:18:04.434587741 +0000 UTC Remote: 2024-06-04 23:17:59.1719696 +0000 UTC m=+368.406865501 (delta=5.262618141s)
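The delta reported above can be checked directly from the two timestamps the log prints: the guest's epoch time returned by date and the host-side "Remote" time. A small Go check using only those two values:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Guest clock as logged: 1717543084.434587741 (seconds.nanoseconds since the Unix epoch).
	guest := time.Unix(1717543084, 434587741).UTC() // 2024-06-04 23:18:04.434587741 UTC
	// Host-side view as logged: 2024-06-04 23:17:59.1719696 +0000 UTC.
	remote := time.Date(2024, time.June, 4, 23, 17, 59, 171969600, time.UTC)

	fmt.Println(guest.Sub(remote)) // 5.262618141s, matching the logged delta
}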
	I0604 23:18:04.427048    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000-m02 ).state
	I0604 23:18:06.771882    6196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:18:06.777131    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:18:06.777131    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000-m02 ).networkadapters[0]).ipaddresses[0]
	I0604 23:18:09.512387    6196 main.go:141] libmachine: [stdout =====>] : 172.20.130.221
	
	I0604 23:18:09.524213    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:18:09.530463    6196 main.go:141] libmachine: Using SSH client type: native
	I0604 23:18:09.530650    6196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf6a540] 0xf6d120 <nil>  [] 0s} 172.20.130.221 22 <nil> <nil>}
	I0604 23:18:09.530650    6196 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1717543084
	I0604 23:18:09.673682    6196 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Jun  4 23:18:04 UTC 2024
	
	I0604 23:18:09.673682    6196 fix.go:236] clock set: Tue Jun  4 23:18:04 UTC 2024
	 (err=<nil>)
	I0604 23:18:09.673682    6196 start.go:83] releasing machines lock for "multinode-022000-m02", held for 2m27.1973314s
	I0604 23:18:09.674306    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000-m02 ).state
	I0604 23:18:12.073018    6196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:18:12.073018    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:18:12.073921    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000-m02 ).networkadapters[0]).ipaddresses[0]
	I0604 23:18:14.927798    6196 main.go:141] libmachine: [stdout =====>] : 172.20.130.221
	
	I0604 23:18:14.927823    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:18:14.934065    6196 out.go:177] * Found network options:
	I0604 23:18:14.940257    6196 out.go:177]   - NO_PROXY=172.20.128.97
	W0604 23:18:14.945292    6196 proxy.go:119] fail to check proxy env: Error ip not in block
	I0604 23:18:14.948565    6196 out.go:177]   - NO_PROXY=172.20.128.97
	W0604 23:18:14.953225    6196 proxy.go:119] fail to check proxy env: Error ip not in block
	W0604 23:18:14.954112    6196 proxy.go:119] fail to check proxy env: Error ip not in block
	I0604 23:18:14.957143    6196 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0604 23:18:14.957143    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000-m02 ).state
	I0604 23:18:14.969282    6196 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0604 23:18:14.969430    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000-m02 ).state
	I0604 23:18:17.399016    6196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:18:17.399096    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:18:17.399164    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000-m02 ).networkadapters[0]).ipaddresses[0]
	I0604 23:18:17.399927    6196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:18:17.399993    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:18:17.400053    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000-m02 ).networkadapters[0]).ipaddresses[0]
	I0604 23:18:20.350627    6196 main.go:141] libmachine: [stdout =====>] : 172.20.130.221
	
	I0604 23:18:20.350853    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:18:20.351346    6196 sshutil.go:53] new ssh client: &{IP:172.20.130.221 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-022000-m02\id_rsa Username:docker}
	I0604 23:18:20.382580    6196 main.go:141] libmachine: [stdout =====>] : 172.20.130.221
	
	I0604 23:18:20.382641    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:18:20.382641    6196 sshutil.go:53] new ssh client: &{IP:172.20.130.221 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-022000-m02\id_rsa Username:docker}
	I0604 23:18:20.452598    6196 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0604 23:18:20.453553    6196 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.4840363s)
	W0604 23:18:20.453553    6196 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0604 23:18:20.466457    6196 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0604 23:18:20.575992    6196 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0604 23:18:20.575992    6196 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.6188047s)
	I0604 23:18:20.576266    6196 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0604 23:18:20.576266    6196 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
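The find invocation above renames any bridge or podman CNI configs so they stop taking effect on the new node; its -printf format shows up as %!p(MISSING) only because the logger dropped the argument for a %p verb. A roughly equivalent shell sketch, with quoting adjusted for an interactive shell (illustrative, not the exact minikube code path):

    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
      -printf '%p, ' -exec sh -c 'sudo mv "$1" "$1.mk_disabled"' _ {} \;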
	I0604 23:18:20.576266    6196 start.go:494] detecting cgroup driver to use...
	I0604 23:18:20.576266    6196 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0604 23:18:20.613479    6196 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0604 23:18:20.626312    6196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0604 23:18:20.666240    6196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0604 23:18:20.694545    6196 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0604 23:18:20.708354    6196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0604 23:18:20.744349    6196 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0604 23:18:20.780748    6196 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0604 23:18:20.820695    6196 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0604 23:18:20.865712    6196 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0604 23:18:20.904861    6196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0604 23:18:20.940278    6196 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0604 23:18:20.976219    6196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0604 23:18:21.011790    6196 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0604 23:18:21.037096    6196 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0604 23:18:21.050503    6196 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0604 23:18:21.089578    6196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0604 23:18:21.315850    6196 ssh_runner.go:195] Run: sudo systemctl restart containerd
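The block above prepares containerd on the new node: crictl is pointed at the containerd socket, config.toml is rewritten so the CRI plugin uses the cgroupfs driver and the runc v2 runtime with the pause:3.9 sandbox image, and bridge-nf-call-iptables plus ip_forward are enabled before containerd restarts. Condensed into a shell sketch (commands lifted from the log lines above; illustrative only):

    # point crictl at containerd and switch the CRI plugin to the cgroupfs driver
    printf 'runtime-endpoint: unix:///run/containerd/containerd.sock\n' | sudo tee /etc/crictl.yaml
    sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml
    sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml
    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
    sudo systemctl daemon-reload && sudo systemctl restart containerd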
	I0604 23:18:21.352603    6196 start.go:494] detecting cgroup driver to use...
	I0604 23:18:21.364524    6196 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0604 23:18:21.394481    6196 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0604 23:18:21.394530    6196 command_runner.go:130] > [Unit]
	I0604 23:18:21.394530    6196 command_runner.go:130] > Description=Docker Application Container Engine
	I0604 23:18:21.394613    6196 command_runner.go:130] > Documentation=https://docs.docker.com
	I0604 23:18:21.394613    6196 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0604 23:18:21.394613    6196 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0604 23:18:21.394613    6196 command_runner.go:130] > StartLimitBurst=3
	I0604 23:18:21.394689    6196 command_runner.go:130] > StartLimitIntervalSec=60
	I0604 23:18:21.394689    6196 command_runner.go:130] > [Service]
	I0604 23:18:21.394751    6196 command_runner.go:130] > Type=notify
	I0604 23:18:21.394751    6196 command_runner.go:130] > Restart=on-failure
	I0604 23:18:21.394751    6196 command_runner.go:130] > Environment=NO_PROXY=172.20.128.97
	I0604 23:18:21.394751    6196 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0604 23:18:21.394810    6196 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0604 23:18:21.394810    6196 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0604 23:18:21.394810    6196 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0604 23:18:21.394810    6196 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0604 23:18:21.394810    6196 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0604 23:18:21.394810    6196 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0604 23:18:21.394936    6196 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0604 23:18:21.394936    6196 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0604 23:18:21.394936    6196 command_runner.go:130] > ExecStart=
	I0604 23:18:21.394936    6196 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0604 23:18:21.394936    6196 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0604 23:18:21.395084    6196 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0604 23:18:21.395116    6196 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0604 23:18:21.395116    6196 command_runner.go:130] > LimitNOFILE=infinity
	I0604 23:18:21.395116    6196 command_runner.go:130] > LimitNPROC=infinity
	I0604 23:18:21.395116    6196 command_runner.go:130] > LimitCORE=infinity
	I0604 23:18:21.395116    6196 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0604 23:18:21.395116    6196 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0604 23:18:21.395116    6196 command_runner.go:130] > TasksMax=infinity
	I0604 23:18:21.395195    6196 command_runner.go:130] > TimeoutStartSec=0
	I0604 23:18:21.395195    6196 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0604 23:18:21.395195    6196 command_runner.go:130] > Delegate=yes
	I0604 23:18:21.395195    6196 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0604 23:18:21.395195    6196 command_runner.go:130] > KillMode=process
	I0604 23:18:21.395262    6196 command_runner.go:130] > [Install]
	I0604 23:18:21.395262    6196 command_runner.go:130] > WantedBy=multi-user.target
	I0604 23:18:21.407378    6196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0604 23:18:21.449881    6196 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0604 23:18:21.492572    6196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0604 23:18:21.532454    6196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0604 23:18:21.577131    6196 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0604 23:18:21.638980    6196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0604 23:18:21.665989    6196 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0604 23:18:21.714593    6196 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0604 23:18:21.727109    6196 ssh_runner.go:195] Run: which cri-dockerd
	I0604 23:18:21.734685    6196 command_runner.go:130] > /usr/bin/cri-dockerd
	I0604 23:18:21.747902    6196 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0604 23:18:21.772238    6196 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0604 23:18:21.822253    6196 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0604 23:18:22.051123    6196 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0604 23:18:22.266516    6196 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0604 23:18:22.266603    6196 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0604 23:18:22.315702    6196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0604 23:18:22.542314    6196 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0604 23:18:25.133769    6196 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5913256s)
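docker.go:574 switches Docker itself to the cgroupfs cgroup driver by copying a 130-byte /etc/docker/daemon.json onto the node; the log does not echo the file, so the contents below are an assumption: a minimal daemon.json that selects that driver, followed by the restart seen above.

    sudo tee /etc/docker/daemon.json <<'EOF'
    {
      "exec-opts": ["native.cgroupdriver=cgroupfs"]
    }
    EOF
    sudo systemctl daemon-reload
    sudo systemctl restart docker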
	I0604 23:18:25.147140    6196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0604 23:18:25.190347    6196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0604 23:18:25.233116    6196 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0604 23:18:25.457580    6196 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0604 23:18:25.681445    6196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0604 23:18:25.904637    6196 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0604 23:18:25.956426    6196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0604 23:18:25.998355    6196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0604 23:18:26.229598    6196 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
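With Docker back up, cri-docker.socket is unmasked and enabled and cri-docker.service restarted so the kubelet can reach Docker over the CRI socket at /var/run/cri-dockerd.sock. The same sequence condensed from the commands above (the combined restart at the end is a simplification):

    printf 'runtime-endpoint: unix:///var/run/cri-dockerd.sock\n' | sudo tee /etc/crictl.yaml
    sudo systemctl unmask cri-docker.socket
    sudo systemctl enable cri-docker.socket
    sudo systemctl daemon-reload
    sudo systemctl restart cri-docker.socket cri-docker.service
    stat /var/run/cri-dockerd.sock    # the socket should exist once the service is up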
	I0604 23:18:26.369965    6196 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0604 23:18:26.383946    6196 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0604 23:18:26.395004    6196 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0604 23:18:26.395065    6196 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0604 23:18:26.395065    6196 command_runner.go:130] > Device: 0,22	Inode: 896         Links: 1
	I0604 23:18:26.395065    6196 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0604 23:18:26.395065    6196 command_runner.go:130] > Access: 2024-06-04 23:18:26.264546037 +0000
	I0604 23:18:26.395143    6196 command_runner.go:130] > Modify: 2024-06-04 23:18:26.264546037 +0000
	I0604 23:18:26.395143    6196 command_runner.go:130] > Change: 2024-06-04 23:18:26.268546042 +0000
	I0604 23:18:26.395143    6196 command_runner.go:130] >  Birth: -
	I0604 23:18:26.395202    6196 start.go:562] Will wait 60s for crictl version
	I0604 23:18:26.410130    6196 ssh_runner.go:195] Run: which crictl
	I0604 23:18:26.417755    6196 command_runner.go:130] > /usr/bin/crictl
	I0604 23:18:26.432373    6196 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0604 23:18:26.494537    6196 command_runner.go:130] > Version:  0.1.0
	I0604 23:18:26.495379    6196 command_runner.go:130] > RuntimeName:  docker
	I0604 23:18:26.495379    6196 command_runner.go:130] > RuntimeVersion:  26.1.3
	I0604 23:18:26.495379    6196 command_runner.go:130] > RuntimeApiVersion:  v1
	I0604 23:18:26.495379    6196 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.1.3
	RuntimeApiVersion:  v1
	I0604 23:18:26.505409    6196 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0604 23:18:26.541515    6196 command_runner.go:130] > 26.1.3
	I0604 23:18:26.552369    6196 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0604 23:18:26.594387    6196 command_runner.go:130] > 26.1.3
	I0604 23:18:26.599240    6196 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.1.3 ...
	I0604 23:18:26.603977    6196 out.go:177]   - env NO_PROXY=172.20.128.97
	I0604 23:18:26.606107    6196 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0604 23:18:26.611209    6196 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0604 23:18:26.611209    6196 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0604 23:18:26.611209    6196 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0604 23:18:26.611209    6196 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:24:f8:85 Flags:up|broadcast|multicast|running}
	I0604 23:18:26.614235    6196 ip.go:210] interface addr: fe80::4093:d10:ab69:6c7d/64
	I0604 23:18:26.614235    6196 ip.go:210] interface addr: 172.20.128.1/20
	I0604 23:18:26.629228    6196 ssh_runner.go:195] Run: grep 172.20.128.1	host.minikube.internal$ /etc/hosts
	I0604 23:18:26.635121    6196 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.20.128.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
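Both /etc/hosts edits in this run (host.minikube.internal here, control-plane.minikube.internal later for the worker) use the same idempotent pattern: drop any existing line for the name, then append the current IP. A standalone sketch of that pattern, with the IP taken from the interface scan above:

    IP=172.20.128.1                    # gateway on "vEthernet (Default Switch)" per the ip.go lines
    NAME=host.minikube.internal
    { grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts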
	I0604 23:18:26.665661    6196 mustload.go:65] Loading cluster: multinode-022000
	I0604 23:18:26.666294    6196 config.go:182] Loaded profile config "multinode-022000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0604 23:18:26.666980    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000 ).state
	I0604 23:18:29.076439    6196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:18:29.076439    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:18:29.077158    6196 host.go:66] Checking if "multinode-022000" exists ...
	I0604 23:18:29.077937    6196 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-022000 for IP: 172.20.130.221
	I0604 23:18:29.077937    6196 certs.go:194] generating shared ca certs ...
	I0604 23:18:29.077937    6196 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0604 23:18:29.078686    6196 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0604 23:18:29.079083    6196 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0604 23:18:29.079348    6196 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0604 23:18:29.079607    6196 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0604 23:18:29.079830    6196 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0604 23:18:29.080240    6196 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0604 23:18:29.080969    6196 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14064.pem (1338 bytes)
	W0604 23:18:29.082092    6196 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14064_empty.pem, impossibly tiny 0 bytes
	I0604 23:18:29.082336    6196 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0604 23:18:29.083276    6196 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0604 23:18:29.083712    6196 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0604 23:18:29.083712    6196 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0604 23:18:29.084478    6196 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\140642.pem (1708 bytes)
	I0604 23:18:29.085001    6196 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\140642.pem -> /usr/share/ca-certificates/140642.pem
	I0604 23:18:29.085193    6196 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0604 23:18:29.085193    6196 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14064.pem -> /usr/share/ca-certificates/14064.pem
	I0604 23:18:29.085833    6196 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0604 23:18:29.142329    6196 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0604 23:18:29.192508    6196 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0604 23:18:29.245712    6196 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0604 23:18:29.302632    6196 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\140642.pem --> /usr/share/ca-certificates/140642.pem (1708 bytes)
	I0604 23:18:29.360379    6196 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0604 23:18:29.414902    6196 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14064.pem --> /usr/share/ca-certificates/14064.pem (1338 bytes)
	I0604 23:18:29.483600    6196 ssh_runner.go:195] Run: openssl version
	I0604 23:18:29.493599    6196 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0604 23:18:29.507152    6196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14064.pem && ln -fs /usr/share/ca-certificates/14064.pem /etc/ssl/certs/14064.pem"
	I0604 23:18:29.540108    6196 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14064.pem
	I0604 23:18:29.548386    6196 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jun  4 21:50 /usr/share/ca-certificates/14064.pem
	I0604 23:18:29.548386    6196 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun  4 21:50 /usr/share/ca-certificates/14064.pem
	I0604 23:18:29.561730    6196 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14064.pem
	I0604 23:18:29.572904    6196 command_runner.go:130] > 51391683
	I0604 23:18:29.586256    6196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14064.pem /etc/ssl/certs/51391683.0"
	I0604 23:18:29.626490    6196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/140642.pem && ln -fs /usr/share/ca-certificates/140642.pem /etc/ssl/certs/140642.pem"
	I0604 23:18:29.662532    6196 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/140642.pem
	I0604 23:18:29.669536    6196 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jun  4 21:50 /usr/share/ca-certificates/140642.pem
	I0604 23:18:29.670775    6196 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun  4 21:50 /usr/share/ca-certificates/140642.pem
	I0604 23:18:29.686081    6196 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/140642.pem
	I0604 23:18:29.696479    6196 command_runner.go:130] > 3ec20f2e
	I0604 23:18:29.712433    6196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/140642.pem /etc/ssl/certs/3ec20f2e.0"
	I0604 23:18:29.750016    6196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0604 23:18:29.785201    6196 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0604 23:18:29.793961    6196 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jun  4 21:33 /usr/share/ca-certificates/minikubeCA.pem
	I0604 23:18:29.793961    6196 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun  4 21:33 /usr/share/ca-certificates/minikubeCA.pem
	I0604 23:18:29.807916    6196 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0604 23:18:29.817895    6196 command_runner.go:130] > b5213941
	I0604 23:18:29.831953    6196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
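The openssl/ln pairs above are how each CA ends up trusted system-wide on the node: openssl x509 -hash prints the certificate's subject hash (b5213941 for minikubeCA in this run), and a <hash>.0 symlink under /etc/ssl/certs is the name OpenSSL-based clients actually look up. The pattern for one certificate:

    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")    # prints b5213941 here
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"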
	I0604 23:18:29.871387    6196 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0604 23:18:29.878555    6196 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0604 23:18:29.879521    6196 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0604 23:18:29.879521    6196 kubeadm.go:928] updating node {m02 172.20.130.221 8443 v1.30.1 docker false true} ...
	I0604 23:18:29.879521    6196 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-022000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.20.130.221
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:multinode-022000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0604 23:18:29.892504    6196 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0604 23:18:29.911673    6196 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/binaries/v1.30.1': No such file or directory
	I0604 23:18:29.912682    6196 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.1': No such file or directory
	
	Initiating transfer...
	I0604 23:18:29.925801    6196 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.1
	I0604 23:18:29.948070    6196 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm.sha256
	I0604 23:18:29.948070    6196 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl.sha256
	I0604 23:18:29.948070    6196 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet.sha256
	I0604 23:18:29.948070    6196 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubeadm -> /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0604 23:18:29.948070    6196 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubectl -> /var/lib/minikube/binaries/v1.30.1/kubectl
	I0604 23:18:29.964713    6196 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0604 23:18:29.964713    6196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0604 23:18:29.965819    6196 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubectl
	I0604 23:18:29.972550    6196 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubeadm': No such file or directory
	I0604 23:18:29.972550    6196 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubeadm': No such file or directory
	I0604 23:18:29.972550    6196 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubeadm --> /var/lib/minikube/binaries/v1.30.1/kubeadm (50249880 bytes)
	I0604 23:18:30.017966    6196 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubectl': No such file or directory
	I0604 23:18:30.017966    6196 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubelet -> /var/lib/minikube/binaries/v1.30.1/kubelet
	I0604 23:18:30.018081    6196 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubectl': No such file or directory
	I0604 23:18:30.018081    6196 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubectl --> /var/lib/minikube/binaries/v1.30.1/kubectl (51454104 bytes)
	I0604 23:18:30.032378    6196 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubelet
	I0604 23:18:30.091120    6196 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubelet': No such file or directory
	I0604 23:18:30.091199    6196 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubelet': No such file or directory
	I0604 23:18:30.091277    6196 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubelet --> /var/lib/minikube/binaries/v1.30.1/kubelet (100100024 bytes)
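The worker has no Kubernetes binaries yet, so minikube streams its cached kubeadm, kubectl and kubelet over scp; the binary.go lines also record the upstream URLs and their .sha256 companions. A manual equivalent that fetches one binary directly and verifies the checksum (a sketch, not the code path used above):

    VER=v1.30.1
    curl -fsSLO "https://dl.k8s.io/release/${VER}/bin/linux/amd64/kubelet"
    curl -fsSLO "https://dl.k8s.io/release/${VER}/bin/linux/amd64/kubelet.sha256"
    echo "$(cat kubelet.sha256)  kubelet" | sha256sum --check
    sudo install -m 0755 kubelet "/var/lib/minikube/binaries/${VER}/kubelet"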
	I0604 23:18:31.463231    6196 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0604 23:18:31.485034    6196 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0604 23:18:31.526100    6196 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0604 23:18:31.589116    6196 ssh_runner.go:195] Run: grep 172.20.128.97	control-plane.minikube.internal$ /etc/hosts
	I0604 23:18:31.597123    6196 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.20.128.97	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0604 23:18:31.639789    6196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0604 23:18:31.868188    6196 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0604 23:18:31.909260    6196 host.go:66] Checking if "multinode-022000" exists ...
	I0604 23:18:31.910204    6196 start.go:316] joinCluster: &{Name:multinode-022000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19024/minikube-v1.33.1-1717518792-19024-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717518322-19024@sha256:d2210ba725128d67c6173c8b8d82d6c8736e8dad7a6c389a278f795205c6764f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
1 ClusterName:multinode-022000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.128.97 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.20.130.221 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertE
xpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0604 23:18:31.910356    6196 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0604 23:18:31.910356    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000 ).state
	I0604 23:18:34.299866    6196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:18:34.300208    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:18:34.300273    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000 ).networkadapters[0]).ipaddresses[0]
	I0604 23:18:37.151200    6196 main.go:141] libmachine: [stdout =====>] : 172.20.128.97
	
	I0604 23:18:37.151443    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:18:37.151930    6196 sshutil.go:53] new ssh client: &{IP:172.20.128.97 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-022000\id_rsa Username:docker}
	I0604 23:18:37.373477    6196 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token lovppt.0hva0dl3n0bmygf4 --discovery-token-ca-cert-hash sha256:0ff0b921e821f4428dbcf012dd411f291cf65527f1de8321919343295d694e15 
	I0604 23:18:37.373640    6196 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm token create --print-join-command --ttl=0": (5.4632407s)
	I0604 23:18:37.373785    6196 start.go:342] trying to join worker node "m02" to cluster: &{Name:m02 IP:172.20.130.221 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0604 23:18:37.373826    6196 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token lovppt.0hva0dl3n0bmygf4 --discovery-token-ca-cert-hash sha256:0ff0b921e821f4428dbcf012dd411f291cf65527f1de8321919343295d694e15 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-022000-m02"
	I0604 23:18:37.612732    6196 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0604 23:18:39.474813    6196 command_runner.go:130] > [preflight] Running pre-flight checks
	I0604 23:18:39.474813    6196 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0604 23:18:39.474813    6196 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0604 23:18:39.474813    6196 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0604 23:18:39.474813    6196 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0604 23:18:39.474813    6196 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0604 23:18:39.475816    6196 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0604 23:18:39.475816    6196 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 1.001311914s
	I0604 23:18:39.475816    6196 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap
	I0604 23:18:39.475816    6196 command_runner.go:130] > This node has joined the cluster:
	I0604 23:18:39.475816    6196 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0604 23:18:39.475816    6196 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0604 23:18:39.475816    6196 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0604 23:18:39.475816    6196 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token lovppt.0hva0dl3n0bmygf4 --discovery-token-ca-cert-hash sha256:0ff0b921e821f4428dbcf012dd411f291cf65527f1de8321919343295d694e15 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-022000-m02": (2.1019743s)
	I0604 23:18:39.475816    6196 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0604 23:18:39.719414    6196 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0604 23:18:39.956807    6196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-022000-m02 minikube.k8s.io/updated_at=2024_06_04T23_18_39_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=901ac483c3e1097c63cda7493d918b612a8127f5 minikube.k8s.io/name=multinode-022000 minikube.k8s.io/primary=false
	I0604 23:18:40.092988    6196 command_runner.go:130] > node/multinode-022000-m02 labeled
	I0604 23:18:40.095529    6196 start.go:318] duration metric: took 8.1851708s to joinCluster
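Joining m02 took about 8.2s end to end: a join command is minted on the control plane with kubeadm token create --print-join-command --ttl=0, run on the worker against the cri-dockerd socket, kubelet is enabled, and the node receives its minikube labels. The same flow by hand, with placeholders standing in for the token and CA hash printed above:

    # on the control plane
    kubeadm token create --print-join-command --ttl=0
    # on the worker, using the printed token and hash
    sudo kubeadm join control-plane.minikube.internal:8443 \
      --token <token> --discovery-token-ca-cert-hash sha256:<hash> \
      --ignore-preflight-errors=all \
      --cri-socket unix:///var/run/cri-dockerd.sock \
      --node-name=multinode-022000-m02
    sudo systemctl enable --now kubelet
    # back on the control plane
    kubectl label --overwrite nodes multinode-022000-m02 minikube.k8s.io/primary=false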
	I0604 23:18:40.095670    6196 start.go:234] Will wait 6m0s for node &{Name:m02 IP:172.20.130.221 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0604 23:18:40.098632    6196 out.go:177] * Verifying Kubernetes components...
	I0604 23:18:40.095944    6196 config.go:182] Loaded profile config "multinode-022000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0604 23:18:40.115605    6196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0604 23:18:40.394248    6196 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0604 23:18:40.423538    6196 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0604 23:18:40.423709    6196 kapi.go:59] client config for multinode-022000: &rest.Config{Host:"https://172.20.128.97:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-022000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-022000\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAD
ata:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x240e1a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0604 23:18:40.425022    6196 node_ready.go:35] waiting up to 6m0s for node "multinode-022000-m02" to be "Ready" ...
	I0604 23:18:40.425022    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000-m02
	I0604 23:18:40.425022    6196 round_trippers.go:469] Request Headers:
	I0604 23:18:40.425022    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:18:40.425022    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:18:40.440074    6196 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0604 23:18:40.440112    6196 round_trippers.go:577] Response Headers:
	I0604 23:18:40.440150    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:18:40.440150    6196 round_trippers.go:580]     Content-Length: 4030
	I0604 23:18:40.440150    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:18:40 GMT
	I0604 23:18:40.440150    6196 round_trippers.go:580]     Audit-Id: 5100e006-4aa5-496c-9351-ca800abc3e02
	I0604 23:18:40.440150    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:18:40.440200    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:18:40.440200    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:18:40.440231    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000-m02","uid":"ed02d073-d60e-474d-b9da-ed98f708eeee","resourceVersion":"598","creationTimestamp":"2024-06-04T23:18:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_04T23_18_39_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-04T23:18:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3006 chars]
	I0604 23:18:40.928672    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000-m02
	I0604 23:18:40.928672    6196 round_trippers.go:469] Request Headers:
	I0604 23:18:40.928672    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:18:40.928672    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:18:40.932487    6196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0604 23:18:40.932487    6196 round_trippers.go:577] Response Headers:
	I0604 23:18:40.932487    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:18:40.932487    6196 round_trippers.go:580]     Content-Length: 4030
	I0604 23:18:40.933394    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:18:40 GMT
	I0604 23:18:40.933394    6196 round_trippers.go:580]     Audit-Id: 06726864-d3c1-417b-bf85-631bfe75e809
	I0604 23:18:40.933394    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:18:40.933394    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:18:40.933437    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:18:40.933509    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000-m02","uid":"ed02d073-d60e-474d-b9da-ed98f708eeee","resourceVersion":"598","creationTimestamp":"2024-06-04T23:18:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_04T23_18_39_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-04T23:18:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3006 chars]
	I0604 23:18:41.428300    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000-m02
	I0604 23:18:41.428523    6196 round_trippers.go:469] Request Headers:
	I0604 23:18:41.428523    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:18:41.428523    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:18:41.432963    6196 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0604 23:18:41.432963    6196 round_trippers.go:577] Response Headers:
	I0604 23:18:41.432963    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:18:41 GMT
	I0604 23:18:41.432963    6196 round_trippers.go:580]     Audit-Id: ae86d989-62ae-4307-9b57-9d2113e4ced5
	I0604 23:18:41.432963    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:18:41.432963    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:18:41.432963    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:18:41.432963    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:18:41.432963    6196 round_trippers.go:580]     Content-Length: 4030
	I0604 23:18:41.433795    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000-m02","uid":"ed02d073-d60e-474d-b9da-ed98f708eeee","resourceVersion":"598","creationTimestamp":"2024-06-04T23:18:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_04T23_18_39_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-04T23:18:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3006 chars]
	I0604 23:18:41.931415    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000-m02
	I0604 23:18:41.931479    6196 round_trippers.go:469] Request Headers:
	I0604 23:18:41.931479    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:18:41.931479    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:18:41.937113    6196 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0604 23:18:41.937113    6196 round_trippers.go:577] Response Headers:
	I0604 23:18:41.937113    6196 round_trippers.go:580]     Audit-Id: 03c7fee7-a73c-4d45-9c2d-6742e4e7cd20
	I0604 23:18:41.937464    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:18:41.937464    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:18:41.937464    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:18:41.937464    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:18:41.937464    6196 round_trippers.go:580]     Content-Length: 4030
	I0604 23:18:41.937464    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:18:41 GMT
	I0604 23:18:41.937646    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000-m02","uid":"ed02d073-d60e-474d-b9da-ed98f708eeee","resourceVersion":"598","creationTimestamp":"2024-06-04T23:18:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_04T23_18_39_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-04T23:18:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3006 chars]
	I0604 23:18:42.430803    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000-m02
	I0604 23:18:42.430803    6196 round_trippers.go:469] Request Headers:
	I0604 23:18:42.430803    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:18:42.430803    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:18:42.486727    6196 round_trippers.go:574] Response Status: 200 OK in 55 milliseconds
	I0604 23:18:42.487522    6196 round_trippers.go:577] Response Headers:
	I0604 23:18:42.487522    6196 round_trippers.go:580]     Audit-Id: 76bfa14b-bdac-4ff8-91c6-c8b70b936a0e
	I0604 23:18:42.487522    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:18:42.487522    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:18:42.487522    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:18:42.487607    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:18:42.487607    6196 round_trippers.go:580]     Content-Length: 4030
	I0604 23:18:42.487689    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:18:42 GMT
	I0604 23:18:42.487901    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000-m02","uid":"ed02d073-d60e-474d-b9da-ed98f708eeee","resourceVersion":"598","creationTimestamp":"2024-06-04T23:18:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_04T23_18_39_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-04T23:18:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3006 chars]
	I0604 23:18:42.488039    6196 node_ready.go:53] node "multinode-022000-m02" has status "Ready":"False"
	I0604 23:18:42.930786    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000-m02
	I0604 23:18:42.930786    6196 round_trippers.go:469] Request Headers:
	I0604 23:18:42.930786    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:18:42.930786    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:18:42.943848    6196 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0604 23:18:42.944125    6196 round_trippers.go:577] Response Headers:
	I0604 23:18:42.944125    6196 round_trippers.go:580]     Content-Length: 4030
	I0604 23:18:42.944125    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:18:42 GMT
	I0604 23:18:42.944125    6196 round_trippers.go:580]     Audit-Id: bf4e37ca-52b7-4e86-8b2f-272497233ed5
	I0604 23:18:42.944125    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:18:42.944125    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:18:42.944125    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:18:42.944125    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:18:42.944338    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000-m02","uid":"ed02d073-d60e-474d-b9da-ed98f708eeee","resourceVersion":"598","creationTimestamp":"2024-06-04T23:18:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_04T23_18_39_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-04T23:18:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3006 chars]
	I0604 23:18:43.434470    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000-m02
	I0604 23:18:43.434682    6196 round_trippers.go:469] Request Headers:
	I0604 23:18:43.434682    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:18:43.434682    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:18:43.439239    6196 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0604 23:18:43.439462    6196 round_trippers.go:577] Response Headers:
	I0604 23:18:43.439462    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:18:43.439462    6196 round_trippers.go:580]     Content-Length: 4030
	I0604 23:18:43.439462    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:18:43 GMT
	I0604 23:18:43.439462    6196 round_trippers.go:580]     Audit-Id: 395a0df6-fa51-4daf-908a-77466b46fd37
	I0604 23:18:43.439462    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:18:43.439462    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:18:43.439462    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:18:43.439676    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000-m02","uid":"ed02d073-d60e-474d-b9da-ed98f708eeee","resourceVersion":"598","creationTimestamp":"2024-06-04T23:18:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_04T23_18_39_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-04T23:18:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3006 chars]
	I0604 23:18:43.937069    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000-m02
	I0604 23:18:43.937365    6196 round_trippers.go:469] Request Headers:
	I0604 23:18:43.937365    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:18:43.937365    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:18:43.941933    6196 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0604 23:18:43.941933    6196 round_trippers.go:577] Response Headers:
	I0604 23:18:43.941933    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:18:43.942016    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:18:43.942016    6196 round_trippers.go:580]     Content-Length: 4030
	I0604 23:18:43.942016    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:18:43 GMT
	I0604 23:18:43.942016    6196 round_trippers.go:580]     Audit-Id: 724f516f-22d8-404e-bc38-5ffb98dcded8
	I0604 23:18:43.942016    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:18:43.942016    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:18:43.942097    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000-m02","uid":"ed02d073-d60e-474d-b9da-ed98f708eeee","resourceVersion":"598","creationTimestamp":"2024-06-04T23:18:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_04T23_18_39_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-04T23:18:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3006 chars]
	I0604 23:18:44.440293    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000-m02
	I0604 23:18:44.440293    6196 round_trippers.go:469] Request Headers:
	I0604 23:18:44.440293    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:18:44.440293    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:18:44.445992    6196 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0604 23:18:44.445992    6196 round_trippers.go:577] Response Headers:
	I0604 23:18:44.445992    6196 round_trippers.go:580]     Audit-Id: c1abadcd-08f4-4b93-9b3b-4fb221f1e69e
	I0604 23:18:44.445992    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:18:44.445992    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:18:44.445992    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:18:44.445992    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:18:44.445992    6196 round_trippers.go:580]     Content-Length: 4030
	I0604 23:18:44.445992    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:18:44 GMT
	I0604 23:18:44.445992    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000-m02","uid":"ed02d073-d60e-474d-b9da-ed98f708eeee","resourceVersion":"598","creationTimestamp":"2024-06-04T23:18:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_04T23_18_39_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-04T23:18:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3006 chars]
	I0604 23:18:44.928452    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000-m02
	I0604 23:18:44.928452    6196 round_trippers.go:469] Request Headers:
	I0604 23:18:44.928452    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:18:44.928539    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:18:44.932194    6196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0604 23:18:44.932383    6196 round_trippers.go:577] Response Headers:
	I0604 23:18:44.932383    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:18:44.932383    6196 round_trippers.go:580]     Content-Length: 4030
	I0604 23:18:44.932383    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:18:44 GMT
	I0604 23:18:44.932383    6196 round_trippers.go:580]     Audit-Id: c5953675-a4ae-4a93-9ae7-52e08701c42d
	I0604 23:18:44.932383    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:18:44.932383    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:18:44.932383    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:18:44.932610    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000-m02","uid":"ed02d073-d60e-474d-b9da-ed98f708eeee","resourceVersion":"598","creationTimestamp":"2024-06-04T23:18:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_04T23_18_39_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-04T23:18:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3006 chars]
	I0604 23:18:44.932610    6196 node_ready.go:53] node "multinode-022000-m02" has status "Ready":"False"
	I0604 23:18:45.433521    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000-m02
	I0604 23:18:45.433588    6196 round_trippers.go:469] Request Headers:
	I0604 23:18:45.433588    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:18:45.433588    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:18:45.437234    6196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0604 23:18:45.437915    6196 round_trippers.go:577] Response Headers:
	I0604 23:18:45.437915    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:18:45.437915    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:18:45.437915    6196 round_trippers.go:580]     Content-Length: 4030
	I0604 23:18:45.437915    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:18:45 GMT
	I0604 23:18:45.437915    6196 round_trippers.go:580]     Audit-Id: f67d6c88-ca8a-4e9e-a575-64651d85c1df
	I0604 23:18:45.437915    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:18:45.437915    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:18:45.438123    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000-m02","uid":"ed02d073-d60e-474d-b9da-ed98f708eeee","resourceVersion":"598","creationTimestamp":"2024-06-04T23:18:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_04T23_18_39_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-04T23:18:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3006 chars]
	I0604 23:18:45.940249    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000-m02
	I0604 23:18:45.940347    6196 round_trippers.go:469] Request Headers:
	I0604 23:18:45.940347    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:18:45.940379    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:18:45.945137    6196 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0604 23:18:45.953475    6196 round_trippers.go:577] Response Headers:
	I0604 23:18:45.953475    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:18:45.953475    6196 round_trippers.go:580]     Content-Length: 4030
	I0604 23:18:45.953475    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:18:45 GMT
	I0604 23:18:45.953475    6196 round_trippers.go:580]     Audit-Id: ca29d6e6-f2a5-4a53-ae74-f7d7b7363d79
	I0604 23:18:45.953475    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:18:45.953475    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:18:45.953475    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:18:45.953475    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000-m02","uid":"ed02d073-d60e-474d-b9da-ed98f708eeee","resourceVersion":"598","creationTimestamp":"2024-06-04T23:18:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_04T23_18_39_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-04T23:18:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3006 chars]
	I0604 23:18:46.436131    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000-m02
	I0604 23:18:46.436131    6196 round_trippers.go:469] Request Headers:
	I0604 23:18:46.436131    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:18:46.436131    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:18:46.443161    6196 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0604 23:18:46.443161    6196 round_trippers.go:577] Response Headers:
	I0604 23:18:46.443161    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:18:46.443161    6196 round_trippers.go:580]     Content-Length: 4030
	I0604 23:18:46.443161    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:18:46 GMT
	I0604 23:18:46.443161    6196 round_trippers.go:580]     Audit-Id: 9542c593-372e-49e4-85ea-fab9b7009141
	I0604 23:18:46.443161    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:18:46.443161    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:18:46.443161    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:18:46.443161    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000-m02","uid":"ed02d073-d60e-474d-b9da-ed98f708eeee","resourceVersion":"598","creationTimestamp":"2024-06-04T23:18:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_04T23_18_39_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-04T23:18:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3006 chars]
	I0604 23:18:46.928195    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000-m02
	I0604 23:18:46.928298    6196 round_trippers.go:469] Request Headers:
	I0604 23:18:46.928298    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:18:46.928368    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:18:46.932162    6196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0604 23:18:46.932162    6196 round_trippers.go:577] Response Headers:
	I0604 23:18:46.932162    6196 round_trippers.go:580]     Content-Length: 4030
	I0604 23:18:46.932162    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:18:46 GMT
	I0604 23:18:46.932162    6196 round_trippers.go:580]     Audit-Id: e9c5a291-4271-44ce-8bf9-4a3c86805a39
	I0604 23:18:46.932162    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:18:46.932962    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:18:46.932962    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:18:46.932962    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:18:46.933008    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000-m02","uid":"ed02d073-d60e-474d-b9da-ed98f708eeee","resourceVersion":"598","creationTimestamp":"2024-06-04T23:18:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_04T23_18_39_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-04T23:18:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3006 chars]
	I0604 23:18:46.933008    6196 node_ready.go:53] node "multinode-022000-m02" has status "Ready":"False"
	I0604 23:18:47.432705    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000-m02
	I0604 23:18:47.432760    6196 round_trippers.go:469] Request Headers:
	I0604 23:18:47.432760    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:18:47.432760    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:18:47.436553    6196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0604 23:18:47.436553    6196 round_trippers.go:577] Response Headers:
	I0604 23:18:47.436553    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:18:47.436892    6196 round_trippers.go:580]     Content-Length: 4030
	I0604 23:18:47.436892    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:18:47 GMT
	I0604 23:18:47.436892    6196 round_trippers.go:580]     Audit-Id: 024f13e1-cfbc-420c-93af-a4f8e36eb762
	I0604 23:18:47.436927    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:18:47.436927    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:18:47.436927    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:18:47.437041    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000-m02","uid":"ed02d073-d60e-474d-b9da-ed98f708eeee","resourceVersion":"598","creationTimestamp":"2024-06-04T23:18:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_04T23_18_39_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-04T23:18:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3006 chars]
	I0604 23:18:47.941440    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000-m02
	I0604 23:18:47.941440    6196 round_trippers.go:469] Request Headers:
	I0604 23:18:47.941510    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:18:47.941510    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:18:47.945940    6196 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0604 23:18:47.945940    6196 round_trippers.go:577] Response Headers:
	I0604 23:18:47.945940    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:18:47.945940    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:18:47.945940    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:18:47.945940    6196 round_trippers.go:580]     Content-Length: 4030
	I0604 23:18:47.945940    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:18:47 GMT
	I0604 23:18:47.945940    6196 round_trippers.go:580]     Audit-Id: d4b714e6-c599-4ecf-87b9-d1ebb32e512c
	I0604 23:18:47.945940    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:18:47.945940    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000-m02","uid":"ed02d073-d60e-474d-b9da-ed98f708eeee","resourceVersion":"598","creationTimestamp":"2024-06-04T23:18:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_04T23_18_39_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-04T23:18:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3006 chars]
	I0604 23:18:48.428648    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000-m02
	I0604 23:18:48.428815    6196 round_trippers.go:469] Request Headers:
	I0604 23:18:48.428815    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:18:48.428815    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:18:48.436692    6196 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0604 23:18:48.436692    6196 round_trippers.go:577] Response Headers:
	I0604 23:18:48.436692    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:18:48 GMT
	I0604 23:18:48.436692    6196 round_trippers.go:580]     Audit-Id: 3cb53ff6-a8e7-4b65-90af-df1340304e05
	I0604 23:18:48.436692    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:18:48.436692    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:18:48.436692    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:18:48.436692    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:18:48.436692    6196 round_trippers.go:580]     Content-Length: 4030
	I0604 23:18:48.436692    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000-m02","uid":"ed02d073-d60e-474d-b9da-ed98f708eeee","resourceVersion":"598","creationTimestamp":"2024-06-04T23:18:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_04T23_18_39_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-04T23:18:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3006 chars]
	I0604 23:18:48.932583    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000-m02
	I0604 23:18:48.932658    6196 round_trippers.go:469] Request Headers:
	I0604 23:18:48.932658    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:18:48.932658    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:18:49.049523    6196 round_trippers.go:574] Response Status: 200 OK in 116 milliseconds
	I0604 23:18:49.050025    6196 round_trippers.go:577] Response Headers:
	I0604 23:18:49.050025    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:18:49.050025    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:18:49.050025    6196 round_trippers.go:580]     Content-Length: 4030
	I0604 23:18:49.050025    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:18:49 GMT
	I0604 23:18:49.050025    6196 round_trippers.go:580]     Audit-Id: 677ac292-f44c-4be8-85fd-f4826a28e42d
	I0604 23:18:49.050025    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:18:49.050025    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:18:49.050328    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000-m02","uid":"ed02d073-d60e-474d-b9da-ed98f708eeee","resourceVersion":"598","creationTimestamp":"2024-06-04T23:18:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_04T23_18_39_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-04T23:18:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3006 chars]
	I0604 23:18:49.050609    6196 node_ready.go:53] node "multinode-022000-m02" has status "Ready":"False"
	I0604 23:18:49.436103    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000-m02
	I0604 23:18:49.436103    6196 round_trippers.go:469] Request Headers:
	I0604 23:18:49.436103    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:18:49.436103    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:18:49.442870    6196 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0604 23:18:49.442870    6196 round_trippers.go:577] Response Headers:
	I0604 23:18:49.442870    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:18:49.442870    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:18:49.442870    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:18:49.442870    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:18:49.442870    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:18:49 GMT
	I0604 23:18:49.442870    6196 round_trippers.go:580]     Audit-Id: 6769e4ff-6509-4644-8993-456713c265a1
	I0604 23:18:49.442870    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000-m02","uid":"ed02d073-d60e-474d-b9da-ed98f708eeee","resourceVersion":"611","creationTimestamp":"2024-06-04T23:18:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_04T23_18_39_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-04T23:18:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0604 23:18:49.929635    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000-m02
	I0604 23:18:49.929635    6196 round_trippers.go:469] Request Headers:
	I0604 23:18:49.929699    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:18:49.929699    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:18:49.934199    6196 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0604 23:18:49.934199    6196 round_trippers.go:577] Response Headers:
	I0604 23:18:49.934199    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:18:49.934199    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:18:49.934199    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:18:49.934413    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:18:49 GMT
	I0604 23:18:49.934413    6196 round_trippers.go:580]     Audit-Id: 8cbd9b58-8020-4cd0-90a3-66a819e361d1
	I0604 23:18:49.934413    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:18:49.934598    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000-m02","uid":"ed02d073-d60e-474d-b9da-ed98f708eeee","resourceVersion":"611","creationTimestamp":"2024-06-04T23:18:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_04T23_18_39_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-04T23:18:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0604 23:18:50.434871    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000-m02
	I0604 23:18:50.434985    6196 round_trippers.go:469] Request Headers:
	I0604 23:18:50.434985    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:18:50.434985    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:18:50.438923    6196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0604 23:18:50.438923    6196 round_trippers.go:577] Response Headers:
	I0604 23:18:50.438923    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:18:50.438923    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:18:50.438923    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:18:50 GMT
	I0604 23:18:50.438923    6196 round_trippers.go:580]     Audit-Id: c32efb0a-2528-486b-ae05-1ecd8093e86b
	I0604 23:18:50.439251    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:18:50.439251    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:18:50.439635    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000-m02","uid":"ed02d073-d60e-474d-b9da-ed98f708eeee","resourceVersion":"611","creationTimestamp":"2024-06-04T23:18:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_04T23_18_39_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-04T23:18:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0604 23:18:50.926433    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000-m02
	I0604 23:18:50.926433    6196 round_trippers.go:469] Request Headers:
	I0604 23:18:50.926433    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:18:50.926433    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:18:50.932561    6196 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0604 23:18:50.932785    6196 round_trippers.go:577] Response Headers:
	I0604 23:18:50.932785    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:18:50.932785    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:18:50.932785    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:18:50 GMT
	I0604 23:18:50.932785    6196 round_trippers.go:580]     Audit-Id: ecd6171d-f30e-46ca-b8a0-8c5e8afad934
	I0604 23:18:50.932785    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:18:50.932785    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:18:50.933214    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000-m02","uid":"ed02d073-d60e-474d-b9da-ed98f708eeee","resourceVersion":"611","creationTimestamp":"2024-06-04T23:18:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_04T23_18_39_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-04T23:18:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0604 23:18:51.431733    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000-m02
	I0604 23:18:51.431733    6196 round_trippers.go:469] Request Headers:
	I0604 23:18:51.431733    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:18:51.432046    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:18:51.435366    6196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0604 23:18:51.435366    6196 round_trippers.go:577] Response Headers:
	I0604 23:18:51.436347    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:18:51.436373    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:18:51 GMT
	I0604 23:18:51.436373    6196 round_trippers.go:580]     Audit-Id: a020ffc4-e555-4f3d-82c0-4dd9ade9fd89
	I0604 23:18:51.436373    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:18:51.436373    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:18:51.436373    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:18:51.436737    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000-m02","uid":"ed02d073-d60e-474d-b9da-ed98f708eeee","resourceVersion":"611","creationTimestamp":"2024-06-04T23:18:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_04T23_18_39_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-04T23:18:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0604 23:18:51.437167    6196 node_ready.go:53] node "multinode-022000-m02" has status "Ready":"False"
	I0604 23:18:51.939777    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000-m02
	I0604 23:18:51.939777    6196 round_trippers.go:469] Request Headers:
	I0604 23:18:51.939777    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:18:51.939777    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:18:51.942952    6196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0604 23:18:51.943946    6196 round_trippers.go:577] Response Headers:
	I0604 23:18:51.943946    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:18:51.943946    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:18:51.943946    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:18:51.943946    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:18:51 GMT
	I0604 23:18:51.944059    6196 round_trippers.go:580]     Audit-Id: 37a04768-8e8c-49b9-8408-c43a74c9f12a
	I0604 23:18:51.944059    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:18:51.944361    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000-m02","uid":"ed02d073-d60e-474d-b9da-ed98f708eeee","resourceVersion":"611","creationTimestamp":"2024-06-04T23:18:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_04T23_18_39_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-04T23:18:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0604 23:18:52.437616    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000-m02
	I0604 23:18:52.437826    6196 round_trippers.go:469] Request Headers:
	I0604 23:18:52.437826    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:18:52.437826    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:18:52.441618    6196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0604 23:18:52.441618    6196 round_trippers.go:577] Response Headers:
	I0604 23:18:52.441618    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:18:52 GMT
	I0604 23:18:52.441618    6196 round_trippers.go:580]     Audit-Id: 9cc2fc38-7f00-488e-925a-a29cc361de72
	I0604 23:18:52.441618    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:18:52.441618    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:18:52.441618    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:18:52.441618    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:18:52.441618    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000-m02","uid":"ed02d073-d60e-474d-b9da-ed98f708eeee","resourceVersion":"611","creationTimestamp":"2024-06-04T23:18:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_04T23_18_39_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-04T23:18:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0604 23:18:52.926617    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000-m02
	I0604 23:18:52.926617    6196 round_trippers.go:469] Request Headers:
	I0604 23:18:52.926778    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:18:52.926778    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:18:52.931194    6196 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0604 23:18:52.931194    6196 round_trippers.go:577] Response Headers:
	I0604 23:18:52.931194    6196 round_trippers.go:580]     Audit-Id: eaee1025-11b3-47f9-b882-1524ea05d59c
	I0604 23:18:52.931194    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:18:52.931194    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:18:52.931194    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:18:52.931194    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:18:52.931194    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:18:52 GMT
	I0604 23:18:52.931818    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000-m02","uid":"ed02d073-d60e-474d-b9da-ed98f708eeee","resourceVersion":"611","creationTimestamp":"2024-06-04T23:18:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_04T23_18_39_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-04T23:18:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0604 23:18:53.432137    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000-m02
	I0604 23:18:53.432174    6196 round_trippers.go:469] Request Headers:
	I0604 23:18:53.432235    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:18:53.432235    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:18:53.444948    6196 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0604 23:18:53.444948    6196 round_trippers.go:577] Response Headers:
	I0604 23:18:53.444948    6196 round_trippers.go:580]     Audit-Id: 09e4440d-685d-48b7-a03e-45ac64e6840d
	I0604 23:18:53.444948    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:18:53.444948    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:18:53.444948    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:18:53.444948    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:18:53.444948    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:18:53 GMT
	I0604 23:18:53.444948    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000-m02","uid":"ed02d073-d60e-474d-b9da-ed98f708eeee","resourceVersion":"611","creationTimestamp":"2024-06-04T23:18:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_04T23_18_39_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-04T23:18:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0604 23:18:53.445837    6196 node_ready.go:53] node "multinode-022000-m02" has status "Ready":"False"
	I0604 23:18:53.940177    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000-m02
	I0604 23:18:53.940177    6196 round_trippers.go:469] Request Headers:
	I0604 23:18:53.940289    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:18:53.940289    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:18:53.946847    6196 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0604 23:18:53.946898    6196 round_trippers.go:577] Response Headers:
	I0604 23:18:53.946898    6196 round_trippers.go:580]     Audit-Id: 9c20d688-f6c4-401c-b82b-02f47717309c
	I0604 23:18:53.946898    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:18:53.946898    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:18:53.946898    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:18:53.946898    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:18:53.946898    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:18:53 GMT
	I0604 23:18:53.947075    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000-m02","uid":"ed02d073-d60e-474d-b9da-ed98f708eeee","resourceVersion":"611","creationTimestamp":"2024-06-04T23:18:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_04T23_18_39_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-04T23:18:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0604 23:18:54.425608    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000-m02
	I0604 23:18:54.425802    6196 round_trippers.go:469] Request Headers:
	I0604 23:18:54.425802    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:18:54.425802    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:18:54.591267    6196 round_trippers.go:574] Response Status: 200 OK in 165 milliseconds
	I0604 23:18:54.591910    6196 round_trippers.go:577] Response Headers:
	I0604 23:18:54.591910    6196 round_trippers.go:580]     Audit-Id: 5f5ecd85-71e6-445c-bc74-839937c8c29d
	I0604 23:18:54.591910    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:18:54.591910    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:18:54.591910    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:18:54.591910    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:18:54.591910    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:18:54 GMT
	I0604 23:18:54.592190    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000-m02","uid":"ed02d073-d60e-474d-b9da-ed98f708eeee","resourceVersion":"611","creationTimestamp":"2024-06-04T23:18:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_04T23_18_39_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-04T23:18:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0604 23:18:54.938097    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000-m02
	I0604 23:18:54.938339    6196 round_trippers.go:469] Request Headers:
	I0604 23:18:54.938339    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:18:54.938339    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:18:54.942145    6196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0604 23:18:54.943103    6196 round_trippers.go:577] Response Headers:
	I0604 23:18:54.943144    6196 round_trippers.go:580]     Audit-Id: 0fbc75d3-3b56-4b05-ac9e-092fb80fd764
	I0604 23:18:54.943144    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:18:54.943144    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:18:54.943144    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:18:54.943144    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:18:54.943144    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:18:54 GMT
	I0604 23:18:54.943439    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000-m02","uid":"ed02d073-d60e-474d-b9da-ed98f708eeee","resourceVersion":"611","creationTimestamp":"2024-06-04T23:18:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_04T23_18_39_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-04T23:18:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0604 23:18:55.440130    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000-m02
	I0604 23:18:55.440257    6196 round_trippers.go:469] Request Headers:
	I0604 23:18:55.440325    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:18:55.440325    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:18:55.443789    6196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0604 23:18:55.443789    6196 round_trippers.go:577] Response Headers:
	I0604 23:18:55.443789    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:18:55 GMT
	I0604 23:18:55.444795    6196 round_trippers.go:580]     Audit-Id: b819d914-46b5-4997-b57a-b416699df946
	I0604 23:18:55.444795    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:18:55.444795    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:18:55.444795    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:18:55.444836    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:18:55.445214    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000-m02","uid":"ed02d073-d60e-474d-b9da-ed98f708eeee","resourceVersion":"611","creationTimestamp":"2024-06-04T23:18:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_04T23_18_39_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-04T23:18:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0604 23:18:55.940259    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000-m02
	I0604 23:18:55.940489    6196 round_trippers.go:469] Request Headers:
	I0604 23:18:55.940489    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:18:55.940579    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:18:55.943889    6196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0604 23:18:55.949610    6196 round_trippers.go:577] Response Headers:
	I0604 23:18:55.949610    6196 round_trippers.go:580]     Audit-Id: 3c5f496c-89a0-4eac-b8fe-6a7658d50b11
	I0604 23:18:55.949610    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:18:55.949610    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:18:55.949610    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:18:55.949610    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:18:55.949610    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:18:55 GMT
	I0604 23:18:55.950651    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000-m02","uid":"ed02d073-d60e-474d-b9da-ed98f708eeee","resourceVersion":"611","creationTimestamp":"2024-06-04T23:18:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_04T23_18_39_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-04T23:18:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0604 23:18:55.950651    6196 node_ready.go:53] node "multinode-022000-m02" has status "Ready":"False"
	I0604 23:18:56.440682    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000-m02
	I0604 23:18:56.440682    6196 round_trippers.go:469] Request Headers:
	I0604 23:18:56.440745    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:18:56.440745    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:18:56.453178    6196 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0604 23:18:56.453178    6196 round_trippers.go:577] Response Headers:
	I0604 23:18:56.453178    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:18:56.453178    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:18:56 GMT
	I0604 23:18:56.453178    6196 round_trippers.go:580]     Audit-Id: d0e195d2-de75-4e31-87da-b96f29b0ce2e
	I0604 23:18:56.453178    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:18:56.453178    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:18:56.453178    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:18:56.453178    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000-m02","uid":"ed02d073-d60e-474d-b9da-ed98f708eeee","resourceVersion":"627","creationTimestamp":"2024-06-04T23:18:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_04T23_18_39_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-04T23:18:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3264 chars]
	I0604 23:18:56.454177    6196 node_ready.go:49] node "multinode-022000-m02" has status "Ready":"True"
	I0604 23:18:56.454177    6196 node_ready.go:38] duration metric: took 16.0290284s for node "multinode-022000-m02" to be "Ready" ...
	I0604 23:18:56.454177    6196 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0604 23:18:56.454177    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/namespaces/kube-system/pods
	I0604 23:18:56.454177    6196 round_trippers.go:469] Request Headers:
	I0604 23:18:56.454177    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:18:56.454177    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:18:56.460177    6196 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0604 23:18:56.460177    6196 round_trippers.go:577] Response Headers:
	I0604 23:18:56.460177    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:18:56.460177    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:18:56.460177    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:18:56 GMT
	I0604 23:18:56.460177    6196 round_trippers.go:580]     Audit-Id: f1ff896b-4076-4c99-8363-a0f085b11b3d
	I0604 23:18:56.460177    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:18:56.461109    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:18:56.462565    6196 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"629"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-mlh9s","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"15497b54-7964-47a8-9dc8-89c225f6b842","resourceVersion":"421","creationTimestamp":"2024-06-04T23:15:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"35e6f047-84cd-4ebd-aa42-f4810a209d30","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-04T23:15:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"35e6f047-84cd-4ebd-aa42-f4810a209d30\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 70438 chars]
	I0604 23:18:56.466327    6196 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-mlh9s" in "kube-system" namespace to be "Ready" ...
	I0604 23:18:56.467036    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mlh9s
	I0604 23:18:56.467036    6196 round_trippers.go:469] Request Headers:
	I0604 23:18:56.467036    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:18:56.467036    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:18:56.471615    6196 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0604 23:18:56.471758    6196 round_trippers.go:577] Response Headers:
	I0604 23:18:56.471758    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:18:56 GMT
	I0604 23:18:56.471758    6196 round_trippers.go:580]     Audit-Id: 2603eac6-f83f-406e-b52e-8eb2d57db2ef
	I0604 23:18:56.471850    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:18:56.471850    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:18:56.471850    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:18:56.471850    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:18:56.472040    6196 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-mlh9s","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"15497b54-7964-47a8-9dc8-89c225f6b842","resourceVersion":"421","creationTimestamp":"2024-06-04T23:15:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"35e6f047-84cd-4ebd-aa42-f4810a209d30","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-04T23:15:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"35e6f047-84cd-4ebd-aa42-f4810a209d30\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6578 chars]
	I0604 23:18:56.472173    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000
	I0604 23:18:56.472708    6196 round_trippers.go:469] Request Headers:
	I0604 23:18:56.472708    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:18:56.472708    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:18:56.479049    6196 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0604 23:18:56.479049    6196 round_trippers.go:577] Response Headers:
	I0604 23:18:56.479049    6196 round_trippers.go:580]     Audit-Id: 13f4aafc-3e98-4990-bcf1-bfab4a0a1cfc
	I0604 23:18:56.479212    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:18:56.479212    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:18:56.479212    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:18:56.479212    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:18:56.479212    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:18:56 GMT
	I0604 23:18:56.479409    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000","uid":"e6c7d6d5-31ef-4aae-82b1-0d1130b29243","resourceVersion":"427","creationTimestamp":"2024-06-04T23:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_04T23_15_12_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-04T23:15:07Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0604 23:18:56.479952    6196 pod_ready.go:92] pod "coredns-7db6d8ff4d-mlh9s" in "kube-system" namespace has status "Ready":"True"
	I0604 23:18:56.480014    6196 pod_ready.go:81] duration metric: took 13.1118ms for pod "coredns-7db6d8ff4d-mlh9s" in "kube-system" namespace to be "Ready" ...
	I0604 23:18:56.480014    6196 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-022000" in "kube-system" namespace to be "Ready" ...
	I0604 23:18:56.480137    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-022000
	I0604 23:18:56.480137    6196 round_trippers.go:469] Request Headers:
	I0604 23:18:56.480193    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:18:56.480193    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:18:56.482841    6196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0604 23:18:56.482841    6196 round_trippers.go:577] Response Headers:
	I0604 23:18:56.482841    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:18:56.482841    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:18:56.482841    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:18:56.482841    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:18:56 GMT
	I0604 23:18:56.482841    6196 round_trippers.go:580]     Audit-Id: 8bc91cd3-1781-485e-b29f-921562230dcc
	I0604 23:18:56.482841    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:18:56.483501    6196 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-022000","namespace":"kube-system","uid":"cf5ce7db-ab12-4be8-9e44-317caab1adeb","resourceVersion":"386","creationTimestamp":"2024-06-04T23:15:11Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.20.128.97:2379","kubernetes.io/config.hash":"062055fff54be1dfa52344fae14a29a3","kubernetes.io/config.mirror":"062055fff54be1dfa52344fae14a29a3","kubernetes.io/config.seen":"2024-06-04T23:15:11.311330236Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-022000","uid":"e6c7d6d5-31ef-4aae-82b1-0d1130b29243","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-04T23:15:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6159 chars]
	I0604 23:18:56.483501    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000
	I0604 23:18:56.483501    6196 round_trippers.go:469] Request Headers:
	I0604 23:18:56.483501    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:18:56.483501    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:18:56.486497    6196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0604 23:18:56.486497    6196 round_trippers.go:577] Response Headers:
	I0604 23:18:56.486497    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:18:56.486497    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:18:56.486497    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:18:56.486497    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:18:56 GMT
	I0604 23:18:56.486497    6196 round_trippers.go:580]     Audit-Id: 90a9a8f2-4794-4ccb-a03d-4aeabe98e4a4
	I0604 23:18:56.486497    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:18:56.486497    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000","uid":"e6c7d6d5-31ef-4aae-82b1-0d1130b29243","resourceVersion":"427","creationTimestamp":"2024-06-04T23:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_04T23_15_12_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-04T23:15:07Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0604 23:18:56.487484    6196 pod_ready.go:92] pod "etcd-multinode-022000" in "kube-system" namespace has status "Ready":"True"
	I0604 23:18:56.487484    6196 pod_ready.go:81] duration metric: took 7.4703ms for pod "etcd-multinode-022000" in "kube-system" namespace to be "Ready" ...
	I0604 23:18:56.487484    6196 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-022000" in "kube-system" namespace to be "Ready" ...
	I0604 23:18:56.487484    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-022000
	I0604 23:18:56.487484    6196 round_trippers.go:469] Request Headers:
	I0604 23:18:56.487484    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:18:56.487484    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:18:56.490594    6196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0604 23:18:56.490620    6196 round_trippers.go:577] Response Headers:
	I0604 23:18:56.490620    6196 round_trippers.go:580]     Audit-Id: 8122a3e9-9a8b-4601-80ac-4dab24708f75
	I0604 23:18:56.490701    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:18:56.490701    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:18:56.490701    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:18:56.490701    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:18:56.490701    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:18:56 GMT
	I0604 23:18:56.491192    6196 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-022000","namespace":"kube-system","uid":"a15ca283-cf36-4ce5-846a-37257524e217","resourceVersion":"385","creationTimestamp":"2024-06-04T23:15:10Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.20.128.97:8443","kubernetes.io/config.hash":"9ba2e7a4236a9c9b06cf265710457805","kubernetes.io/config.mirror":"9ba2e7a4236a9c9b06cf265710457805","kubernetes.io/config.seen":"2024-06-04T23:15:02.371587958Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-022000","uid":"e6c7d6d5-31ef-4aae-82b1-0d1130b29243","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-04T23:15:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7694 chars]
	I0604 23:18:56.491497    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000
	I0604 23:18:56.491497    6196 round_trippers.go:469] Request Headers:
	I0604 23:18:56.491497    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:18:56.491497    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:18:56.499516    6196 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0604 23:18:56.499977    6196 round_trippers.go:577] Response Headers:
	I0604 23:18:56.499977    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:18:56.499977    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:18:56.499977    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:18:56.499977    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:18:56.499977    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:18:56 GMT
	I0604 23:18:56.500088    6196 round_trippers.go:580]     Audit-Id: 07252805-fec1-4049-b6fc-6f0779e2753b
	I0604 23:18:56.500286    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000","uid":"e6c7d6d5-31ef-4aae-82b1-0d1130b29243","resourceVersion":"427","creationTimestamp":"2024-06-04T23:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_04T23_15_12_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-04T23:15:07Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0604 23:18:56.500286    6196 pod_ready.go:92] pod "kube-apiserver-multinode-022000" in "kube-system" namespace has status "Ready":"True"
	I0604 23:18:56.500286    6196 pod_ready.go:81] duration metric: took 12.8015ms for pod "kube-apiserver-multinode-022000" in "kube-system" namespace to be "Ready" ...
	I0604 23:18:56.500286    6196 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-022000" in "kube-system" namespace to be "Ready" ...
	I0604 23:18:56.500286    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-022000
	I0604 23:18:56.500286    6196 round_trippers.go:469] Request Headers:
	I0604 23:18:56.500286    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:18:56.500286    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:18:56.516906    6196 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0604 23:18:56.516906    6196 round_trippers.go:577] Response Headers:
	I0604 23:18:56.516906    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:18:56 GMT
	I0604 23:18:56.516906    6196 round_trippers.go:580]     Audit-Id: 3729ee0a-14f8-4804-85ed-c0b86bf10d5a
	I0604 23:18:56.516906    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:18:56.516906    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:18:56.516906    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:18:56.516906    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:18:56.516906    6196 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-022000","namespace":"kube-system","uid":"2bb46405-19fa-4ca8-afd5-6d6224271444","resourceVersion":"382","creationTimestamp":"2024-06-04T23:15:11Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"84c4d645ecc2a919c3d46a8ee859a4e7","kubernetes.io/config.mirror":"84c4d645ecc2a919c3d46a8ee859a4e7","kubernetes.io/config.seen":"2024-06-04T23:15:11.311327436Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-022000","uid":"e6c7d6d5-31ef-4aae-82b1-0d1130b29243","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-04T23:15:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7264 chars]
	I0604 23:18:56.516906    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000
	I0604 23:18:56.516906    6196 round_trippers.go:469] Request Headers:
	I0604 23:18:56.516906    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:18:56.516906    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:18:56.524925    6196 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0604 23:18:56.524925    6196 round_trippers.go:577] Response Headers:
	I0604 23:18:56.524925    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:18:56.524925    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:18:56 GMT
	I0604 23:18:56.524925    6196 round_trippers.go:580]     Audit-Id: 6c9ec9d2-1dea-4346-984f-1ab0d7ab3638
	I0604 23:18:56.524925    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:18:56.524925    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:18:56.524925    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:18:56.525726    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000","uid":"e6c7d6d5-31ef-4aae-82b1-0d1130b29243","resourceVersion":"427","creationTimestamp":"2024-06-04T23:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_04T23_15_12_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-04T23:15:07Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0604 23:18:56.526327    6196 pod_ready.go:92] pod "kube-controller-manager-multinode-022000" in "kube-system" namespace has status "Ready":"True"
	I0604 23:18:56.526327    6196 pod_ready.go:81] duration metric: took 26.0408ms for pod "kube-controller-manager-multinode-022000" in "kube-system" namespace to be "Ready" ...
	I0604 23:18:56.526403    6196 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pbmpr" in "kube-system" namespace to be "Ready" ...
	I0604 23:18:56.650008    6196 request.go:629] Waited for 123.5448ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.128.97:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pbmpr
	I0604 23:18:56.650008    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pbmpr
	I0604 23:18:56.650008    6196 round_trippers.go:469] Request Headers:
	I0604 23:18:56.650008    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:18:56.650008    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:18:56.653364    6196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0604 23:18:56.653364    6196 round_trippers.go:577] Response Headers:
	I0604 23:18:56.653364    6196 round_trippers.go:580]     Audit-Id: c3c0e957-6964-48f9-b5f3-004c994db1ad
	I0604 23:18:56.653364    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:18:56.653364    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:18:56.653364    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:18:56.653364    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:18:56.653887    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:18:56 GMT
	I0604 23:18:56.653995    6196 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-pbmpr","generateName":"kube-proxy-","namespace":"kube-system","uid":"ab42abeb-7ba9-4571-8c49-7c7f1e4bb6be","resourceVersion":"378","creationTimestamp":"2024-06-04T23:15:24Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"65a2d176-a8e8-492d-972b-d687ffc57c3d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-04T23:15:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65a2d176-a8e8-492d-972b-d687ffc57c3d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5828 chars]
	I0604 23:18:56.852344    6196 request.go:629] Waited for 198.3476ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.128.97:8443/api/v1/nodes/multinode-022000
	I0604 23:18:56.852667    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000
	I0604 23:18:56.852726    6196 round_trippers.go:469] Request Headers:
	I0604 23:18:56.852726    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:18:56.852726    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:18:56.856115    6196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0604 23:18:56.856115    6196 round_trippers.go:577] Response Headers:
	I0604 23:18:56.856115    6196 round_trippers.go:580]     Audit-Id: 232800f6-927c-4c12-8811-7fa2efc7c85d
	I0604 23:18:56.856115    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:18:56.856115    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:18:56.856115    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:18:56.856115    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:18:56.856115    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:18:56 GMT
	I0604 23:18:56.857493    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000","uid":"e6c7d6d5-31ef-4aae-82b1-0d1130b29243","resourceVersion":"427","creationTimestamp":"2024-06-04T23:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_04T23_15_12_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-04T23:15:07Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0604 23:18:56.858252    6196 pod_ready.go:92] pod "kube-proxy-pbmpr" in "kube-system" namespace has status "Ready":"True"
	I0604 23:18:56.858252    6196 pod_ready.go:81] duration metric: took 331.8467ms for pod "kube-proxy-pbmpr" in "kube-system" namespace to be "Ready" ...
	I0604 23:18:56.858252    6196 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xb6b5" in "kube-system" namespace to be "Ready" ...
	I0604 23:18:57.040806    6196 request.go:629] Waited for 182.5528ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.128.97:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xb6b5
	I0604 23:18:57.040940    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xb6b5
	I0604 23:18:57.041192    6196 round_trippers.go:469] Request Headers:
	I0604 23:18:57.041192    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:18:57.041192    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:18:57.045371    6196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0604 23:18:57.045371    6196 round_trippers.go:577] Response Headers:
	I0604 23:18:57.045371    6196 round_trippers.go:580]     Audit-Id: ff909036-4d06-4c7b-bf1d-0cbe07a4c5c8
	I0604 23:18:57.045371    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:18:57.045371    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:18:57.045445    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:18:57.045445    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:18:57.045445    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:18:57 GMT
	I0604 23:18:57.045618    6196 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-xb6b5","generateName":"kube-proxy-","namespace":"kube-system","uid":"32c32f53-0cf7-4236-a187-8975de272f62","resourceVersion":"615","creationTimestamp":"2024-06-04T23:18:39Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"65a2d176-a8e8-492d-972b-d687ffc57c3d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-04T23:18:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65a2d176-a8e8-492d-972b-d687ffc57c3d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5841 chars]
	I0604 23:18:57.243192    6196 request.go:629] Waited for 196.427ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.128.97:8443/api/v1/nodes/multinode-022000-m02
	I0604 23:18:57.243315    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000-m02
	I0604 23:18:57.243315    6196 round_trippers.go:469] Request Headers:
	I0604 23:18:57.243315    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:18:57.243315    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:18:57.246909    6196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0604 23:18:57.246909    6196 round_trippers.go:577] Response Headers:
	I0604 23:18:57.246909    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:18:57.246909    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:18:57.247838    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:18:57.247838    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:18:57 GMT
	I0604 23:18:57.247838    6196 round_trippers.go:580]     Audit-Id: a85eee93-bdf0-4e91-840c-5aa626488f54
	I0604 23:18:57.247838    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:18:57.248120    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000-m02","uid":"ed02d073-d60e-474d-b9da-ed98f708eeee","resourceVersion":"627","creationTimestamp":"2024-06-04T23:18:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_04T23_18_39_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-04T23:18:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3264 chars]
	I0604 23:18:57.248308    6196 pod_ready.go:92] pod "kube-proxy-xb6b5" in "kube-system" namespace has status "Ready":"True"
	I0604 23:18:57.248308    6196 pod_ready.go:81] duration metric: took 390.0526ms for pod "kube-proxy-xb6b5" in "kube-system" namespace to be "Ready" ...
	I0604 23:18:57.248308    6196 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-022000" in "kube-system" namespace to be "Ready" ...
	I0604 23:18:57.446621    6196 request.go:629] Waited for 198.1022ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.128.97:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-022000
	I0604 23:18:57.446729    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-022000
	I0604 23:18:57.446729    6196 round_trippers.go:469] Request Headers:
	I0604 23:18:57.446913    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:18:57.446995    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:18:57.451599    6196 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0604 23:18:57.451810    6196 round_trippers.go:577] Response Headers:
	I0604 23:18:57.451810    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:18:57.451810    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:18:57.451810    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:18:57 GMT
	I0604 23:18:57.451810    6196 round_trippers.go:580]     Audit-Id: 91aa8dc1-280a-4c34-b7c1-43a6dd9aed33
	I0604 23:18:57.451810    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:18:57.451810    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:18:57.452219    6196 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-022000","namespace":"kube-system","uid":"0453fac4-fec2-4a1f-80f7-c3192dae4ea5","resourceVersion":"384","creationTimestamp":"2024-06-04T23:15:11Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7091343253039c34aad74dccf8d697b0","kubernetes.io/config.mirror":"7091343253039c34aad74dccf8d697b0","kubernetes.io/config.seen":"2024-06-04T23:15:11.311328836Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-022000","uid":"e6c7d6d5-31ef-4aae-82b1-0d1130b29243","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-04T23:15:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4994 chars]
	I0604 23:18:57.648851    6196 request.go:629] Waited for 195.8094ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.128.97:8443/api/v1/nodes/multinode-022000
	I0604 23:18:57.649151    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000
	I0604 23:18:57.649151    6196 round_trippers.go:469] Request Headers:
	I0604 23:18:57.649151    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:18:57.649151    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:18:57.652538    6196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0604 23:18:57.652538    6196 round_trippers.go:577] Response Headers:
	I0604 23:18:57.652538    6196 round_trippers.go:580]     Audit-Id: 889c65f7-0502-4d93-92ab-ac9c02921fda
	I0604 23:18:57.652538    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:18:57.653187    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:18:57.653187    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:18:57.653187    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:18:57.653187    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:18:57 GMT
	I0604 23:18:57.653379    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000","uid":"e6c7d6d5-31ef-4aae-82b1-0d1130b29243","resourceVersion":"427","creationTimestamp":"2024-06-04T23:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_04T23_15_12_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-04T23:15:07Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0604 23:18:57.654035    6196 pod_ready.go:92] pod "kube-scheduler-multinode-022000" in "kube-system" namespace has status "Ready":"True"
	I0604 23:18:57.654035    6196 pod_ready.go:81] duration metric: took 405.7242ms for pod "kube-scheduler-multinode-022000" in "kube-system" namespace to be "Ready" ...
	I0604 23:18:57.654188    6196 pod_ready.go:38] duration metric: took 1.200001s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0604 23:18:57.654275    6196 system_svc.go:44] waiting for kubelet service to be running ....
	I0604 23:18:57.666933    6196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0604 23:18:57.694284    6196 system_svc.go:56] duration metric: took 40.009ms WaitForService to wait for kubelet
	I0604 23:18:57.694284    6196 kubeadm.go:576] duration metric: took 17.5984755s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0604 23:18:57.694284    6196 node_conditions.go:102] verifying NodePressure condition ...
	I0604 23:18:57.850811    6196 request.go:629] Waited for 156.1825ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.128.97:8443/api/v1/nodes
	I0604 23:18:57.850811    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes
	I0604 23:18:57.850811    6196 round_trippers.go:469] Request Headers:
	I0604 23:18:57.850811    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:18:57.850811    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:18:57.855922    6196 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0604 23:18:57.855922    6196 round_trippers.go:577] Response Headers:
	I0604 23:18:57.855922    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:18:57.855922    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:18:57.855922    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:18:57 GMT
	I0604 23:18:57.855922    6196 round_trippers.go:580]     Audit-Id: 751e6b09-6d7a-4d60-a391-71a8e93b1249
	I0604 23:18:57.855922    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:18:57.855922    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:18:57.855922    6196 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"631"},"items":[{"metadata":{"name":"multinode-022000","uid":"e6c7d6d5-31ef-4aae-82b1-0d1130b29243","resourceVersion":"427","creationTimestamp":"2024-06-04T23:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_04T23_15_12_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 9268 chars]
	I0604 23:18:57.857152    6196 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0604 23:18:57.857152    6196 node_conditions.go:123] node cpu capacity is 2
	I0604 23:18:57.857228    6196 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0604 23:18:57.857228    6196 node_conditions.go:123] node cpu capacity is 2
	I0604 23:18:57.857228    6196 node_conditions.go:105] duration metric: took 162.9424ms to run NodePressure ...
	I0604 23:18:57.857228    6196 start.go:240] waiting for startup goroutines ...
	I0604 23:18:57.857228    6196 start.go:254] writing updated cluster config ...
	I0604 23:18:57.869579    6196 ssh_runner.go:195] Run: rm -f paused
	I0604 23:18:58.022987    6196 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0604 23:18:58.028415    6196 out.go:177] * Done! kubectl is now configured to use "multinode-022000" cluster and "default" namespace by default
	
	
	==> Docker <==
	Jun 04 23:15:39 multinode-022000 dockerd[1336]: time="2024-06-04T23:15:39.359878895Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 04 23:15:39 multinode-022000 dockerd[1336]: time="2024-06-04T23:15:39.397798470Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 04 23:15:39 multinode-022000 dockerd[1336]: time="2024-06-04T23:15:39.398155973Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 04 23:15:39 multinode-022000 dockerd[1336]: time="2024-06-04T23:15:39.398586576Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 04 23:15:39 multinode-022000 dockerd[1336]: time="2024-06-04T23:15:39.399143580Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 04 23:15:39 multinode-022000 cri-dockerd[1236]: time="2024-06-04T23:15:39Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/379b62cc1d5a78dc5bbb257d47fd30daf0919acb923a8206677fe47cdf98ea02/resolv.conf as [nameserver 172.20.128.1]"
	Jun 04 23:15:39 multinode-022000 cri-dockerd[1236]: time="2024-06-04T23:15:39Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/675bb5a4c04a1fe4f3ea5cefc8df63710e47609fad5295e49311f60534b83464/resolv.conf as [nameserver 172.20.128.1]"
	Jun 04 23:15:39 multinode-022000 dockerd[1336]: time="2024-06-04T23:15:39.842369051Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 04 23:15:39 multinode-022000 dockerd[1336]: time="2024-06-04T23:15:39.843831161Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 04 23:15:39 multinode-022000 dockerd[1336]: time="2024-06-04T23:15:39.843933861Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 04 23:15:39 multinode-022000 dockerd[1336]: time="2024-06-04T23:15:39.844216663Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 04 23:15:39 multinode-022000 dockerd[1336]: time="2024-06-04T23:15:39.965070683Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 04 23:15:39 multinode-022000 dockerd[1336]: time="2024-06-04T23:15:39.965906588Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 04 23:15:39 multinode-022000 dockerd[1336]: time="2024-06-04T23:15:39.966104390Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 04 23:15:39 multinode-022000 dockerd[1336]: time="2024-06-04T23:15:39.966757594Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 04 23:19:25 multinode-022000 dockerd[1336]: time="2024-06-04T23:19:25.937682296Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 04 23:19:25 multinode-022000 dockerd[1336]: time="2024-06-04T23:19:25.938159801Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 04 23:19:25 multinode-022000 dockerd[1336]: time="2024-06-04T23:19:25.938237701Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 04 23:19:25 multinode-022000 dockerd[1336]: time="2024-06-04T23:19:25.938740806Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 04 23:19:26 multinode-022000 cri-dockerd[1236]: time="2024-06-04T23:19:26Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/0954c9343f31c4946bfd429a1ad215a82da95e5ae2afdb88166571b5af0adf05/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jun 04 23:19:27 multinode-022000 cri-dockerd[1236]: time="2024-06-04T23:19:27Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Jun 04 23:19:27 multinode-022000 dockerd[1336]: time="2024-06-04T23:19:27.797212457Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 04 23:19:27 multinode-022000 dockerd[1336]: time="2024-06-04T23:19:27.798264368Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 04 23:19:27 multinode-022000 dockerd[1336]: time="2024-06-04T23:19:27.798390669Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 04 23:19:27 multinode-022000 dockerd[1336]: time="2024-06-04T23:19:27.798930874Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	411a3919d8cac       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   51 seconds ago      Running             busybox                   0                   0954c9343f31c       busybox-fc5497c4f-8bcjx
	03f3b4de24580       cbb01a7bd410d                                                                                         4 minutes ago       Running             coredns                   0                   675bb5a4c04a1       coredns-7db6d8ff4d-mlh9s
	2dba3a07a5a2f       6e38f40d628db                                                                                         4 minutes ago       Running             storage-provisioner       0                   379b62cc1d5a7       storage-provisioner
	3df3de0da4c3c       kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8              4 minutes ago       Running             kindnet-cni               0                   4b233b76fa4c7       kindnet-s279j
	e160006e01953       747097150317f                                                                                         4 minutes ago       Running             kube-proxy                0                   1a26f50a38b56       kube-proxy-pbmpr
	e7a691c4c711b       a52dc94f0a912                                                                                         5 minutes ago       Running             kube-scheduler            0                   09d03e1ab4e31       kube-scheduler-multinode-022000
	6fa66b9502ad4       25a1387cdab82                                                                                         5 minutes ago       Running             kube-controller-manager   0                   3d4cf95b0d999       kube-controller-manager-multinode-022000
	05c914a510d03       91be940803172                                                                                         5 minutes ago       Running             kube-apiserver            0                   3f588a7c5b099       kube-apiserver-multinode-022000
	8b3adda489455       3861cfcd7c04c                                                                                         5 minutes ago       Running             etcd                      0                   51f3d3843b646       etcd-multinode-022000
	
	
	==> coredns [03f3b4de2458] <==
	[INFO] 10.244.0.3:35762 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000244603s
	[INFO] 10.244.1.2:41972 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000131101s
	[INFO] 10.244.1.2:44208 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000138602s
	[INFO] 10.244.1.2:41832 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000278203s
	[INFO] 10.244.1.2:41631 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000165302s
	[INFO] 10.244.1.2:59710 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000069001s
	[INFO] 10.244.1.2:43987 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0000622s
	[INFO] 10.244.1.2:49013 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000077701s
	[INFO] 10.244.1.2:60219 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000173902s
	[INFO] 10.244.0.3:35268 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000186701s
	[INFO] 10.244.0.3:45568 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0000608s
	[INFO] 10.244.0.3:41299 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000113002s
	[INFO] 10.244.0.3:59664 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000224903s
	[INFO] 10.244.1.2:59289 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000105601s
	[INFO] 10.244.1.2:38478 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000110401s
	[INFO] 10.244.1.2:39370 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000214302s
	[INFO] 10.244.1.2:39440 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000056401s
	[INFO] 10.244.0.3:38326 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000260803s
	[INFO] 10.244.0.3:37752 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000227802s
	[INFO] 10.244.0.3:59155 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000166002s
	[INFO] 10.244.0.3:34407 - 5 "PTR IN 1.128.20.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000186702s
	[INFO] 10.244.1.2:40850 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000151301s
	[INFO] 10.244.1.2:55108 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000114201s
	[INFO] 10.244.1.2:44936 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0000624s
	[INFO] 10.244.1.2:51542 - 5 "PTR IN 1.128.20.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000095901s
	
	
	==> describe nodes <==
	Name:               multinode-022000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-022000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=901ac483c3e1097c63cda7493d918b612a8127f5
	                    minikube.k8s.io/name=multinode-022000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_04T23_15_12_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 04 Jun 2024 23:15:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-022000
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 04 Jun 2024 23:20:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 04 Jun 2024 23:19:47 +0000   Tue, 04 Jun 2024 23:15:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 04 Jun 2024 23:19:47 +0000   Tue, 04 Jun 2024 23:15:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 04 Jun 2024 23:19:47 +0000   Tue, 04 Jun 2024 23:15:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 04 Jun 2024 23:19:47 +0000   Tue, 04 Jun 2024 23:15:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.20.128.97
	  Hostname:    multinode-022000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 3d6c5809fb3440069dd9b4ef8addbc3e
	  System UUID:                4c5c03cf-a4e2-8c42-8f91-37d86e19cfc3
	  Boot ID:                    edaf61b4-2d1c-46eb-84d1-21d1359cb7e1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.1.3
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-8bcjx                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	  kube-system                 coredns-7db6d8ff4d-mlh9s                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m54s
	  kube-system                 etcd-multinode-022000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m8s
	  kube-system                 kindnet-s279j                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m55s
	  kube-system                 kube-apiserver-multinode-022000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m9s
	  kube-system                 kube-controller-manager-multinode-022000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m8s
	  kube-system                 kube-proxy-pbmpr                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m55s
	  kube-system                 kube-scheduler-multinode-022000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m8s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m46s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m52s  kube-proxy       
	  Normal  Starting                 5m8s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m8s   kubelet          Node multinode-022000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m8s   kubelet          Node multinode-022000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m8s   kubelet          Node multinode-022000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m8s   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m56s  node-controller  Node multinode-022000 event: Registered Node multinode-022000 in Controller
	  Normal  NodeReady                4m41s  kubelet          Node multinode-022000 status is now: NodeReady
	
	
	Name:               multinode-022000-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-022000-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=901ac483c3e1097c63cda7493d918b612a8127f5
	                    minikube.k8s.io/name=multinode-022000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_04T23_18_39_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 04 Jun 2024 23:18:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-022000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 04 Jun 2024 23:20:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 04 Jun 2024 23:19:40 +0000   Tue, 04 Jun 2024 23:18:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 04 Jun 2024 23:19:40 +0000   Tue, 04 Jun 2024 23:18:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 04 Jun 2024 23:19:40 +0000   Tue, 04 Jun 2024 23:18:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 04 Jun 2024 23:19:40 +0000   Tue, 04 Jun 2024 23:18:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.20.130.221
	  Hostname:    multinode-022000-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 bd6919e87c8e4561b186751987589023
	  System UUID:                2246aa18-3838-a94d-a4f8-a3805e5cd9b5
	  Boot ID:                    58a26601-1670-47eb-a478-fb94fa292d33
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.1.3
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-cbgjv    0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	  kube-system                 kindnet-4rf65              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      100s
	  kube-system                 kube-proxy-xb6b5           0 (0%)        0 (0%)      0 (0%)           0 (0%)         100s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 89s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  101s (x2 over 101s)  kubelet          Node multinode-022000-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    101s (x2 over 101s)  kubelet          Node multinode-022000-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     101s (x2 over 101s)  kubelet          Node multinode-022000-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  101s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           100s                 node-controller  Node multinode-022000-m02 event: Registered Node multinode-022000-m02 in Controller
	  Normal  NodeReady                83s                  kubelet          Node multinode-022000-m02 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000010] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +53.334138] systemd-fstab-generator[654]: Ignoring "noauto" option for root device
	[  +0.186580] systemd-fstab-generator[666]: Ignoring "noauto" option for root device
	[Jun 4 23:14] systemd-fstab-generator[953]: Ignoring "noauto" option for root device
	[  +0.111896] kauditd_printk_skb: 59 callbacks suppressed
	[  +0.649522] systemd-fstab-generator[993]: Ignoring "noauto" option for root device
	[  +0.231682] systemd-fstab-generator[1005]: Ignoring "noauto" option for root device
	[  +0.248826] systemd-fstab-generator[1019]: Ignoring "noauto" option for root device
	[  +2.853059] systemd-fstab-generator[1189]: Ignoring "noauto" option for root device
	[  +0.210042] systemd-fstab-generator[1201]: Ignoring "noauto" option for root device
	[  +0.219250] systemd-fstab-generator[1213]: Ignoring "noauto" option for root device
	[  +0.305198] systemd-fstab-generator[1228]: Ignoring "noauto" option for root device
	[ +11.780357] systemd-fstab-generator[1322]: Ignoring "noauto" option for root device
	[  +0.115595] kauditd_printk_skb: 205 callbacks suppressed
	[  +3.283147] systemd-fstab-generator[1520]: Ignoring "noauto" option for root device
	[Jun 4 23:15] systemd-fstab-generator[1729]: Ignoring "noauto" option for root device
	[  +0.103471] kauditd_printk_skb: 73 callbacks suppressed
	[  +9.589942] systemd-fstab-generator[2139]: Ignoring "noauto" option for root device
	[  +0.153452] kauditd_printk_skb: 62 callbacks suppressed
	[ +13.869630] systemd-fstab-generator[2324]: Ignoring "noauto" option for root device
	[  +0.187226] kauditd_printk_skb: 12 callbacks suppressed
	[  +8.014750] kauditd_printk_skb: 51 callbacks suppressed
	[Jun 4 23:19] kauditd_printk_skb: 12 callbacks suppressed
	[  +3.073718] hrtimer: interrupt took 2176522 ns
	
	
	==> etcd [8b3adda48945] <==
	{"level":"info","ts":"2024-06-04T23:15:04.728105Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"86784d5939b76b84 received MsgVoteResp from 86784d5939b76b84 at term 2"}
	{"level":"info","ts":"2024-06-04T23:15:04.72827Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"86784d5939b76b84 became leader at term 2"}
	{"level":"info","ts":"2024-06-04T23:15:04.728504Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 86784d5939b76b84 elected leader 86784d5939b76b84 at term 2"}
	{"level":"info","ts":"2024-06-04T23:15:04.740229Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"86784d5939b76b84","local-member-attributes":"{Name:multinode-022000 ClientURLs:[https://172.20.128.97:2379]}","request-path":"/0/members/86784d5939b76b84/attributes","cluster-id":"45eec04bab277b8","publish-timeout":"7s"}
	{"level":"info","ts":"2024-06-04T23:15:04.740743Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-04T23:15:04.741066Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-04T23:15:04.741104Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-06-04T23:15:04.746602Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-06-04T23:15:04.741263Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-04T23:15:04.752999Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.20.128.97:2379"}
	{"level":"info","ts":"2024-06-04T23:15:04.76649Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"45eec04bab277b8","local-member-id":"86784d5939b76b84","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-04T23:15:04.770198Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-04T23:15:04.770513Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-04T23:15:04.78028Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-06-04T23:15:32.177681Z","caller":"traceutil/trace.go:171","msg":"trace[1735885837] transaction","detail":"{read_only:false; response_revision:382; number_of_response:1; }","duration":"365.493935ms","start":"2024-06-04T23:15:31.812172Z","end":"2024-06-04T23:15:32.177666Z","steps":["trace[1735885837] 'process raft request'  (duration: 365.391234ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-04T23:15:32.178609Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-04T23:15:31.812156Z","time spent":"365.826537ms","remote":"127.0.0.1:58688","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":6345,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-controller-manager-multinode-022000\" mod_revision:303 > success:<request_put:<key:\"/registry/pods/kube-system/kube-controller-manager-multinode-022000\" value_size:6270 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-controller-manager-multinode-022000\" > >"}
	{"level":"info","ts":"2024-06-04T23:15:32.228143Z","caller":"traceutil/trace.go:171","msg":"trace[1721186031] transaction","detail":"{read_only:false; response_revision:383; number_of_response:1; }","duration":"191.847392ms","start":"2024-06-04T23:15:32.036277Z","end":"2024-06-04T23:15:32.228125Z","steps":["trace[1721186031] 'process raft request'  (duration: 191.376689ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-04T23:18:32.153325Z","caller":"traceutil/trace.go:171","msg":"trace[1599411460] transaction","detail":"{read_only:false; response_revision:561; number_of_response:1; }","duration":"245.204814ms","start":"2024-06-04T23:18:31.908099Z","end":"2024-06-04T23:18:32.153304Z","steps":["trace[1599411460] 'process raft request'  (duration: 245.079013ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-04T23:18:42.803512Z","caller":"traceutil/trace.go:171","msg":"trace[1170162956] transaction","detail":"{read_only:false; response_revision:602; number_of_response:1; }","duration":"154.116142ms","start":"2024-06-04T23:18:42.649376Z","end":"2024-06-04T23:18:42.803492Z","steps":["trace[1170162956] 'process raft request'  (duration: 153.99664ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-04T23:18:49.050562Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"112.239708ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-022000-m02\" ","response":"range_response_count:1 size:2848"}
	{"level":"info","ts":"2024-06-04T23:18:49.050913Z","caller":"traceutil/trace.go:171","msg":"trace[1685928890] range","detail":"{range_begin:/registry/minions/multinode-022000-m02; range_end:; response_count:1; response_revision:609; }","duration":"112.621712ms","start":"2024-06-04T23:18:48.938275Z","end":"2024-06-04T23:18:49.050897Z","steps":["trace[1685928890] 'range keys from in-memory index tree'  (duration: 112.124707ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-04T23:18:54.590637Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"158.75205ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-022000-m02\" ","response":"range_response_count:1 size:3149"}
	{"level":"info","ts":"2024-06-04T23:18:54.591449Z","caller":"traceutil/trace.go:171","msg":"trace[1307728064] range","detail":"{range_begin:/registry/minions/multinode-022000-m02; range_end:; response_count:1; response_revision:621; }","duration":"159.601558ms","start":"2024-06-04T23:18:54.431834Z","end":"2024-06-04T23:18:54.591435Z","steps":["trace[1307728064] 'range keys from in-memory index tree'  (duration: 157.181734ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-04T23:18:54.591102Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"113.743811ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1116"}
	{"level":"info","ts":"2024-06-04T23:18:54.592082Z","caller":"traceutil/trace.go:171","msg":"trace[1723561822] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:621; }","duration":"114.78262ms","start":"2024-06-04T23:18:54.477285Z","end":"2024-06-04T23:18:54.592068Z","steps":["trace[1723561822] 'range keys from in-memory index tree'  (duration: 113.65311ms)"],"step_count":1}
	
	
	==> kernel <==
	 23:20:19 up 7 min,  0 users,  load average: 0.10, 0.29, 0.17
	Linux multinode-022000 5.10.207 #1 SMP Tue Jun 4 20:09:42 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [3df3de0da4c3] <==
	I0604 23:19:14.191457       1 main.go:250] Node multinode-022000-m02 has CIDR [10.244.1.0/24] 
	I0604 23:19:24.205386       1 main.go:223] Handling node with IPs: map[172.20.128.97:{}]
	I0604 23:19:24.206047       1 main.go:227] handling current node
	I0604 23:19:24.206429       1 main.go:223] Handling node with IPs: map[172.20.130.221:{}]
	I0604 23:19:24.206716       1 main.go:250] Node multinode-022000-m02 has CIDR [10.244.1.0/24] 
	I0604 23:19:34.216503       1 main.go:223] Handling node with IPs: map[172.20.128.97:{}]
	I0604 23:19:34.216603       1 main.go:227] handling current node
	I0604 23:19:34.216621       1 main.go:223] Handling node with IPs: map[172.20.130.221:{}]
	I0604 23:19:34.216629       1 main.go:250] Node multinode-022000-m02 has CIDR [10.244.1.0/24] 
	I0604 23:19:44.230004       1 main.go:223] Handling node with IPs: map[172.20.128.97:{}]
	I0604 23:19:44.230035       1 main.go:227] handling current node
	I0604 23:19:44.230047       1 main.go:223] Handling node with IPs: map[172.20.130.221:{}]
	I0604 23:19:44.230052       1 main.go:250] Node multinode-022000-m02 has CIDR [10.244.1.0/24] 
	I0604 23:19:54.245192       1 main.go:223] Handling node with IPs: map[172.20.128.97:{}]
	I0604 23:19:54.245360       1 main.go:227] handling current node
	I0604 23:19:54.245378       1 main.go:223] Handling node with IPs: map[172.20.130.221:{}]
	I0604 23:19:54.245386       1 main.go:250] Node multinode-022000-m02 has CIDR [10.244.1.0/24] 
	I0604 23:20:04.256120       1 main.go:223] Handling node with IPs: map[172.20.128.97:{}]
	I0604 23:20:04.256351       1 main.go:227] handling current node
	I0604 23:20:04.256392       1 main.go:223] Handling node with IPs: map[172.20.130.221:{}]
	I0604 23:20:04.256418       1 main.go:250] Node multinode-022000-m02 has CIDR [10.244.1.0/24] 
	I0604 23:20:14.266176       1 main.go:223] Handling node with IPs: map[172.20.128.97:{}]
	I0604 23:20:14.266290       1 main.go:227] handling current node
	I0604 23:20:14.266307       1 main.go:223] Handling node with IPs: map[172.20.130.221:{}]
	I0604 23:20:14.266315       1 main.go:250] Node multinode-022000-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [05c914a510d0] <==
	I0604 23:15:08.713518       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0604 23:15:08.722789       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0604 23:15:08.722829       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0604 23:15:09.947302       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0604 23:15:10.035350       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0604 23:15:10.232813       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0604 23:15:10.262496       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.20.128.97]
	I0604 23:15:10.264235       1 controller.go:615] quota admission added evaluator for: endpoints
	I0604 23:15:10.273648       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0604 23:15:10.773786       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0604 23:15:11.242066       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0604 23:15:11.305800       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0604 23:15:11.355040       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0604 23:15:24.649565       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0604 23:15:24.729903       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0604 23:19:31.319940       1 conn.go:339] Error on socket receive: read tcp 172.20.128.97:8443->172.20.128.1:64510: use of closed network connection
	E0604 23:19:31.895448       1 conn.go:339] Error on socket receive: read tcp 172.20.128.97:8443->172.20.128.1:64513: use of closed network connection
	E0604 23:19:32.540474       1 conn.go:339] Error on socket receive: read tcp 172.20.128.97:8443->172.20.128.1:64515: use of closed network connection
	E0604 23:19:33.125452       1 conn.go:339] Error on socket receive: read tcp 172.20.128.97:8443->172.20.128.1:64517: use of closed network connection
	E0604 23:19:33.672858       1 conn.go:339] Error on socket receive: read tcp 172.20.128.97:8443->172.20.128.1:64519: use of closed network connection
	E0604 23:19:34.212422       1 conn.go:339] Error on socket receive: read tcp 172.20.128.97:8443->172.20.128.1:64521: use of closed network connection
	E0604 23:19:35.192885       1 conn.go:339] Error on socket receive: read tcp 172.20.128.97:8443->172.20.128.1:64524: use of closed network connection
	E0604 23:19:45.762907       1 conn.go:339] Error on socket receive: read tcp 172.20.128.97:8443->172.20.128.1:64526: use of closed network connection
	E0604 23:19:46.298718       1 conn.go:339] Error on socket receive: read tcp 172.20.128.97:8443->172.20.128.1:64529: use of closed network connection
	E0604 23:19:56.829563       1 conn.go:339] Error on socket receive: read tcp 172.20.128.97:8443->172.20.128.1:64531: use of closed network connection
	
	
	==> kube-controller-manager [6fa66b9502ad] <==
	I0604 23:15:24.727488       1 shared_informer.go:320] Caches are synced for garbage collector
	I0604 23:15:24.727512       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0604 23:15:25.311802       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="651.701783ms"
	I0604 23:15:25.460079       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="148.208779ms"
	I0604 23:15:25.460467       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="68.2µs"
	I0604 23:15:25.870748       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="122.955363ms"
	I0604 23:15:25.922585       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="51.765637ms"
	I0604 23:15:25.950785       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="28.111929ms"
	I0604 23:15:25.950913       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="89µs"
	I0604 23:15:38.684348       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="104.6µs"
	I0604 23:15:38.721437       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="107.901µs"
	I0604 23:15:38.979188       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0604 23:15:40.789513       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="25.879262ms"
	I0604 23:15:40.790881       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="94.803µs"
	I0604 23:18:38.991339       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-022000-m02\" does not exist"
	I0604 23:18:39.010747       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-022000-m02" podCIDRs=["10.244.1.0/24"]
	I0604 23:18:39.013790       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-022000-m02"
	I0604 23:18:56.388011       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-022000-m02"
	I0604 23:19:25.341611       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="121.243329ms"
	I0604 23:19:25.403275       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="61.596173ms"
	I0604 23:19:25.403620       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="270.603µs"
	I0604 23:19:28.246774       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.136603ms"
	I0604 23:19:28.247075       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="30.8µs"
	I0604 23:19:28.687177       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.475206ms"
	I0604 23:19:28.687294       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="29.001µs"
	
	
	==> kube-proxy [e160006e0195] <==
	I0604 23:15:26.334501       1 server_linux.go:69] "Using iptables proxy"
	I0604 23:15:26.351604       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.20.128.97"]
	I0604 23:15:26.408095       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0604 23:15:26.408614       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0604 23:15:26.408639       1 server_linux.go:165] "Using iptables Proxier"
	I0604 23:15:26.416392       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0604 23:15:26.417197       1 server.go:872] "Version info" version="v1.30.1"
	I0604 23:15:26.417299       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0604 23:15:26.419396       1 config.go:192] "Starting service config controller"
	I0604 23:15:26.419605       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0604 23:15:26.419645       1 config.go:101] "Starting endpoint slice config controller"
	I0604 23:15:26.420505       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0604 23:15:26.421709       1 config.go:319] "Starting node config controller"
	I0604 23:15:26.421746       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0604 23:15:26.519983       1 shared_informer.go:320] Caches are synced for service config
	I0604 23:15:26.521427       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0604 23:15:26.522325       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [e7a691c4c711] <==
	W0604 23:15:08.993825       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0604 23:15:08.993898       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0604 23:15:09.019374       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0604 23:15:09.019687       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0604 23:15:09.043741       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0604 23:15:09.043806       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0604 23:15:09.096122       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0604 23:15:09.096183       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0604 23:15:09.109897       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0604 23:15:09.110353       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0604 23:15:09.116456       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0604 23:15:09.116572       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0604 23:15:09.140631       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0604 23:15:09.140863       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0604 23:15:09.158729       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0604 23:15:09.158901       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0604 23:15:09.159080       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0604 23:15:09.159123       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0604 23:15:09.254117       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0604 23:15:09.254175       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0604 23:15:09.267345       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0604 23:15:09.267593       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0604 23:15:09.294580       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0604 23:15:09.294643       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0604 23:15:12.023143       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jun 04 23:16:11 multinode-022000 kubelet[2146]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 04 23:17:11 multinode-022000 kubelet[2146]: E0604 23:17:11.446639    2146 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 04 23:17:11 multinode-022000 kubelet[2146]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 04 23:17:11 multinode-022000 kubelet[2146]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 04 23:17:11 multinode-022000 kubelet[2146]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 04 23:17:11 multinode-022000 kubelet[2146]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 04 23:18:11 multinode-022000 kubelet[2146]: E0604 23:18:11.447350    2146 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 04 23:18:11 multinode-022000 kubelet[2146]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 04 23:18:11 multinode-022000 kubelet[2146]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 04 23:18:11 multinode-022000 kubelet[2146]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 04 23:18:11 multinode-022000 kubelet[2146]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 04 23:19:11 multinode-022000 kubelet[2146]: E0604 23:19:11.450618    2146 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 04 23:19:11 multinode-022000 kubelet[2146]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 04 23:19:11 multinode-022000 kubelet[2146]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 04 23:19:11 multinode-022000 kubelet[2146]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 04 23:19:11 multinode-022000 kubelet[2146]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 04 23:19:25 multinode-022000 kubelet[2146]: I0604 23:19:25.315505    2146 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-mlh9s" podStartSLOduration=240.315476101 podStartE2EDuration="4m0.315476101s" podCreationTimestamp="2024-06-04 23:15:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-04 23:15:40.762581581 +0000 UTC m=+29.618948308" watchObservedRunningTime="2024-06-04 23:19:25.315476101 +0000 UTC m=+254.171842928"
	Jun 04 23:19:25 multinode-022000 kubelet[2146]: I0604 23:19:25.316151    2146 topology_manager.go:215] "Topology Admit Handler" podUID="42c59041-ba7d-4f44-8dfa-73f166ae9f5d" podNamespace="default" podName="busybox-fc5497c4f-8bcjx"
	Jun 04 23:19:25 multinode-022000 kubelet[2146]: I0604 23:19:25.493877    2146 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pk9wv\" (UniqueName: \"kubernetes.io/projected/42c59041-ba7d-4f44-8dfa-73f166ae9f5d-kube-api-access-pk9wv\") pod \"busybox-fc5497c4f-8bcjx\" (UID: \"42c59041-ba7d-4f44-8dfa-73f166ae9f5d\") " pod="default/busybox-fc5497c4f-8bcjx"
	Jun 04 23:19:26 multinode-022000 kubelet[2146]: I0604 23:19:26.172207    2146 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0954c9343f31c4946bfd429a1ad215a82da95e5ae2afdb88166571b5af0adf05"
	Jun 04 23:20:11 multinode-022000 kubelet[2146]: E0604 23:20:11.448269    2146 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 04 23:20:11 multinode-022000 kubelet[2146]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 04 23:20:11 multinode-022000 kubelet[2146]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 04 23:20:11 multinode-022000 kubelet[2146]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 04 23:20:11 multinode-022000 kubelet[2146]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0604 23:20:10.378350    8012 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-022000 -n multinode-022000
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-022000 -n multinode-022000: (13.4670784s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-022000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (60.63s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (295.83s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-022000 node start m03 -v=7 --alsologtostderr
E0604 23:33:17.018097   14064 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-369400\client.crt: The system cannot find the path specified.
E0604 23:34:48.876385   14064 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-235400\client.crt: The system cannot find the path specified.
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-022000 node start m03 -v=7 --alsologtostderr: exit status 90 (3m1.3245206s)

                                                
                                                
-- stdout --
	* Starting "multinode-022000-m03" worker node in "multinode-022000" cluster
	* Restarting existing hyperv VM for "multinode-022000-m03" ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0604 23:32:50.656328    4896 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0604 23:32:50.751695    4896 out.go:291] Setting OutFile to fd 1124 ...
	I0604 23:32:50.763550    4896 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0604 23:32:50.763550    4896 out.go:304] Setting ErrFile to fd 876...
	I0604 23:32:50.763550    4896 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0604 23:32:50.794991    4896 mustload.go:65] Loading cluster: multinode-022000
	I0604 23:32:50.795766    4896 config.go:182] Loaded profile config "multinode-022000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0604 23:32:50.797132    4896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000-m03 ).state
	I0604 23:32:53.096023    4896 main.go:141] libmachine: [stdout =====>] : Off
	
	I0604 23:32:53.096023    4896 main.go:141] libmachine: [stderr =====>] : 
	W0604 23:32:53.096023    4896 host.go:58] "multinode-022000-m03" host status: Stopped
	I0604 23:32:53.100599    4896 out.go:177] * Starting "multinode-022000-m03" worker node in "multinode-022000" cluster
	I0604 23:32:53.103197    4896 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0604 23:32:53.103331    4896 preload.go:147] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
	I0604 23:32:53.103331    4896 cache.go:56] Caching tarball of preloaded images
	I0604 23:32:53.103860    4896 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0604 23:32:53.104122    4896 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0604 23:32:53.104122    4896 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-022000\config.json ...
	I0604 23:32:53.106701    4896 start.go:360] acquireMachinesLock for multinode-022000-m03: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0604 23:32:53.107130    4896 start.go:364] duration metric: took 372.3µs to acquireMachinesLock for "multinode-022000-m03"
	I0604 23:32:53.107130    4896 start.go:96] Skipping create...Using existing machine configuration
	I0604 23:32:53.107330    4896 fix.go:54] fixHost starting: m03
	I0604 23:32:53.107454    4896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000-m03 ).state
	I0604 23:32:55.401236    4896 main.go:141] libmachine: [stdout =====>] : Off
	
	I0604 23:32:55.401236    4896 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:32:55.410288    4896 fix.go:112] recreateIfNeeded on multinode-022000-m03: state=Stopped err=<nil>
	W0604 23:32:55.410288    4896 fix.go:138] unexpected machine state, will restart: <nil>
	I0604 23:32:55.413589    4896 out.go:177] * Restarting existing hyperv VM for "multinode-022000-m03" ...
	I0604 23:32:55.416304    4896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-022000-m03
	I0604 23:32:58.724730    4896 main.go:141] libmachine: [stdout =====>] : 
	I0604 23:32:58.724730    4896 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:32:58.724730    4896 main.go:141] libmachine: Waiting for host to start...
	I0604 23:32:58.724730    4896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000-m03 ).state
	I0604 23:33:01.116708    4896 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:33:01.116708    4896 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:33:01.116708    4896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000-m03 ).networkadapters[0]).ipaddresses[0]
	I0604 23:33:03.826898    4896 main.go:141] libmachine: [stdout =====>] : 
	I0604 23:33:03.827125    4896 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:33:04.829994    4896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000-m03 ).state
	I0604 23:33:07.194239    4896 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:33:07.197075    4896 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:33:07.197118    4896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000-m03 ).networkadapters[0]).ipaddresses[0]
	I0604 23:33:09.890259    4896 main.go:141] libmachine: [stdout =====>] : 
	I0604 23:33:09.890259    4896 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:33:10.891822    4896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000-m03 ).state
	I0604 23:33:13.239384    4896 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:33:13.239384    4896 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:33:13.242526    4896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000-m03 ).networkadapters[0]).ipaddresses[0]
	I0604 23:33:15.997068    4896 main.go:141] libmachine: [stdout =====>] : 
	I0604 23:33:15.997068    4896 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:33:17.019354    4896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000-m03 ).state
	I0604 23:33:19.371699    4896 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:33:19.371699    4896 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:33:19.384823    4896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000-m03 ).networkadapters[0]).ipaddresses[0]
	I0604 23:33:22.113040    4896 main.go:141] libmachine: [stdout =====>] : 
	I0604 23:33:22.113040    4896 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:33:23.128061    4896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000-m03 ).state
	I0604 23:33:25.502676    4896 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:33:25.502676    4896 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:33:25.502848    4896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000-m03 ).networkadapters[0]).ipaddresses[0]
	I0604 23:33:28.279529    4896 main.go:141] libmachine: [stdout =====>] : 172.20.128.16
	
	I0604 23:33:28.279529    4896 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:33:28.285441    4896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000-m03 ).state
	I0604 23:33:30.578651    4896 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:33:30.578651    4896 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:33:30.578939    4896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000-m03 ).networkadapters[0]).ipaddresses[0]
	I0604 23:33:33.376058    4896 main.go:141] libmachine: [stdout =====>] : 172.20.128.16
	
	I0604 23:33:33.376058    4896 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:33:33.376569    4896 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-022000\config.json ...
	I0604 23:33:33.379096    4896 machine.go:94] provisionDockerMachine start ...
	I0604 23:33:33.379183    4896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000-m03 ).state
	I0604 23:33:35.647263    4896 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:33:35.647263    4896 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:33:35.652283    4896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000-m03 ).networkadapters[0]).ipaddresses[0]
	I0604 23:33:38.401883    4896 main.go:141] libmachine: [stdout =====>] : 172.20.128.16
	
	I0604 23:33:38.401883    4896 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:33:38.421527    4896 main.go:141] libmachine: Using SSH client type: native
	I0604 23:33:38.422108    4896 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf6a540] 0xf6d120 <nil>  [] 0s} 172.20.128.16 22 <nil> <nil>}
	I0604 23:33:38.422108    4896 main.go:141] libmachine: About to run SSH command:
	hostname
	I0604 23:33:38.570866    4896 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0604 23:33:38.571026    4896 buildroot.go:166] provisioning hostname "multinode-022000-m03"
	I0604 23:33:38.571185    4896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000-m03 ).state
	I0604 23:33:40.839783    4896 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:33:40.839783    4896 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:33:40.852720    4896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000-m03 ).networkadapters[0]).ipaddresses[0]
	I0604 23:33:43.557954    4896 main.go:141] libmachine: [stdout =====>] : 172.20.128.16
	
	I0604 23:33:43.557954    4896 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:33:43.576105    4896 main.go:141] libmachine: Using SSH client type: native
	I0604 23:33:43.576713    4896 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf6a540] 0xf6d120 <nil>  [] 0s} 172.20.128.16 22 <nil> <nil>}
	I0604 23:33:43.576889    4896 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-022000-m03 && echo "multinode-022000-m03" | sudo tee /etc/hostname
	I0604 23:33:43.780818    4896 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-022000-m03
	
	I0604 23:33:43.780818    4896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000-m03 ).state
	I0604 23:33:46.043473    4896 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:33:46.057007    4896 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:33:46.057007    4896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000-m03 ).networkadapters[0]).ipaddresses[0]
	I0604 23:33:48.779572    4896 main.go:141] libmachine: [stdout =====>] : 172.20.128.16
	
	I0604 23:33:48.779572    4896 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:33:48.797531    4896 main.go:141] libmachine: Using SSH client type: native
	I0604 23:33:48.798238    4896 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf6a540] 0xf6d120 <nil>  [] 0s} 172.20.128.16 22 <nil> <nil>}
	I0604 23:33:48.798822    4896 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-022000-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-022000-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-022000-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0604 23:33:48.950478    4896 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0604 23:33:48.950478    4896 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0604 23:33:48.950478    4896 buildroot.go:174] setting up certificates
	I0604 23:33:48.950478    4896 provision.go:84] configureAuth start
	I0604 23:33:48.950478    4896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000-m03 ).state
	I0604 23:33:51.217823    4896 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:33:51.218168    4896 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:33:51.218168    4896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000-m03 ).networkadapters[0]).ipaddresses[0]
	I0604 23:33:53.989487    4896 main.go:141] libmachine: [stdout =====>] : 172.20.128.16
	
	I0604 23:33:54.002977    4896 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:33:54.003102    4896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000-m03 ).state
	I0604 23:33:56.341311    4896 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:33:56.341311    4896 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:33:56.341311    4896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000-m03 ).networkadapters[0]).ipaddresses[0]
	I0604 23:33:59.127027    4896 main.go:141] libmachine: [stdout =====>] : 172.20.128.16
	
	I0604 23:33:59.139778    4896 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:33:59.139778    4896 provision.go:143] copyHostCerts
	I0604 23:33:59.140062    4896 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0604 23:33:59.140543    4896 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0604 23:33:59.140650    4896 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0604 23:33:59.141422    4896 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0604 23:33:59.143242    4896 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0604 23:33:59.143738    4896 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0604 23:33:59.143738    4896 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0604 23:33:59.144217    4896 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0604 23:33:59.145586    4896 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0604 23:33:59.146071    4896 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0604 23:33:59.146071    4896 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0604 23:33:59.146635    4896 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0604 23:33:59.147711    4896 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-022000-m03 san=[127.0.0.1 172.20.128.16 localhost minikube multinode-022000-m03]
	I0604 23:33:59.480532    4896 provision.go:177] copyRemoteCerts
	I0604 23:33:59.501266    4896 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0604 23:33:59.501338    4896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000-m03 ).state
	I0604 23:34:01.773949    4896 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:34:01.785998    4896 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:34:01.786111    4896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000-m03 ).networkadapters[0]).ipaddresses[0]
	I0604 23:34:04.552414    4896 main.go:141] libmachine: [stdout =====>] : 172.20.128.16
	
	I0604 23:34:04.552414    4896 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:34:04.565601    4896 sshutil.go:53] new ssh client: &{IP:172.20.128.16 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-022000-m03\id_rsa Username:docker}
	I0604 23:34:04.679181    4896 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.1778037s)
	I0604 23:34:04.679181    4896 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0604 23:34:04.679765    4896 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0604 23:34:04.730707    4896 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0604 23:34:04.731947    4896 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0604 23:34:04.783286    4896 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0604 23:34:04.786522    4896 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0604 23:34:04.836395    4896 provision.go:87] duration metric: took 15.885797s to configureAuth
	I0604 23:34:04.836395    4896 buildroot.go:189] setting minikube options for container-runtime
	I0604 23:34:04.837320    4896 config.go:182] Loaded profile config "multinode-022000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0604 23:34:04.837320    4896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000-m03 ).state
	I0604 23:34:07.109165    4896 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:34:07.109458    4896 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:34:07.109458    4896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000-m03 ).networkadapters[0]).ipaddresses[0]
	I0604 23:34:09.828941    4896 main.go:141] libmachine: [stdout =====>] : 172.20.128.16
	
	I0604 23:34:09.828941    4896 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:34:09.848833    4896 main.go:141] libmachine: Using SSH client type: native
	I0604 23:34:09.849871    4896 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf6a540] 0xf6d120 <nil>  [] 0s} 172.20.128.16 22 <nil> <nil>}
	I0604 23:34:09.849871    4896 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0604 23:34:09.985395    4896 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0604 23:34:09.985395    4896 buildroot.go:70] root file system type: tmpfs
	I0604 23:34:09.985561    4896 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0604 23:34:09.985677    4896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000-m03 ).state
	I0604 23:34:12.265118    4896 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:34:12.265118    4896 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:34:12.265763    4896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000-m03 ).networkadapters[0]).ipaddresses[0]
	I0604 23:34:14.977873    4896 main.go:141] libmachine: [stdout =====>] : 172.20.128.16
	
	I0604 23:34:14.990924    4896 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:34:14.999656    4896 main.go:141] libmachine: Using SSH client type: native
	I0604 23:34:15.000448    4896 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf6a540] 0xf6d120 <nil>  [] 0s} 172.20.128.16 22 <nil> <nil>}
	I0604 23:34:15.000448    4896 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0604 23:34:15.163653    4896 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0604 23:34:15.163742    4896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000-m03 ).state
	I0604 23:34:17.402172    4896 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:34:17.406509    4896 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:34:17.406509    4896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000-m03 ).networkadapters[0]).ipaddresses[0]
	I0604 23:34:20.115627    4896 main.go:141] libmachine: [stdout =====>] : 172.20.128.16
	
	I0604 23:34:20.115627    4896 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:34:20.124078    4896 main.go:141] libmachine: Using SSH client type: native
	I0604 23:34:20.124708    4896 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf6a540] 0xf6d120 <nil>  [] 0s} 172.20.128.16 22 <nil> <nil>}
	I0604 23:34:20.124708    4896 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0604 23:34:22.432119    4896 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0604 23:34:22.432264    4896 machine.go:97] duration metric: took 49.0527108s to provisionDockerMachine
	I0604 23:34:22.432264    4896 start.go:293] postStartSetup for "multinode-022000-m03" (driver="hyperv")
	I0604 23:34:22.432346    4896 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0604 23:34:22.446377    4896 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0604 23:34:22.446377    4896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000-m03 ).state
	I0604 23:34:24.754164    4896 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:34:24.754164    4896 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:34:24.767406    4896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000-m03 ).networkadapters[0]).ipaddresses[0]
	I0604 23:34:27.498678    4896 main.go:141] libmachine: [stdout =====>] : 172.20.128.16
	
	I0604 23:34:27.498678    4896 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:34:27.505501    4896 sshutil.go:53] new ssh client: &{IP:172.20.128.16 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-022000-m03\id_rsa Username:docker}
	I0604 23:34:27.625839    4896 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.179423s)
	I0604 23:34:27.641641    4896 ssh_runner.go:195] Run: cat /etc/os-release
	I0604 23:34:27.651634    4896 info.go:137] Remote host: Buildroot 2023.02.9
	I0604 23:34:27.651634    4896 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0604 23:34:27.652038    4896 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0604 23:34:27.652780    4896 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\140642.pem -> 140642.pem in /etc/ssl/certs
	I0604 23:34:27.652780    4896 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\140642.pem -> /etc/ssl/certs/140642.pem
	I0604 23:34:27.664708    4896 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0604 23:34:27.688915    4896 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\140642.pem --> /etc/ssl/certs/140642.pem (1708 bytes)
	I0604 23:34:27.745593    4896 start.go:296] duration metric: took 5.3132069s for postStartSetup
	I0604 23:34:27.745593    4896 fix.go:56] duration metric: took 1m34.6377518s for fixHost
	I0604 23:34:27.745593    4896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000-m03 ).state
	I0604 23:34:30.039100    4896 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:34:30.053213    4896 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:34:30.053429    4896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000-m03 ).networkadapters[0]).ipaddresses[0]
	I0604 23:34:32.780869    4896 main.go:141] libmachine: [stdout =====>] : 172.20.128.16
	
	I0604 23:34:32.780869    4896 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:34:32.800997    4896 main.go:141] libmachine: Using SSH client type: native
	I0604 23:34:32.800997    4896 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf6a540] 0xf6d120 <nil>  [] 0s} 172.20.128.16 22 <nil> <nil>}
	I0604 23:34:32.800997    4896 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0604 23:34:32.940287    4896 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717544072.934468388
	
	I0604 23:34:32.940287    4896 fix.go:216] guest clock: 1717544072.934468388
	I0604 23:34:32.940287    4896 fix.go:229] Guest: 2024-06-04 23:34:32.934468388 +0000 UTC Remote: 2024-06-04 23:34:27.7455936 +0000 UTC m=+97.183119401 (delta=5.188874788s)
	I0604 23:34:32.940287    4896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000-m03 ).state
	I0604 23:34:35.236109    4896 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:34:35.236109    4896 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:34:35.248493    4896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000-m03 ).networkadapters[0]).ipaddresses[0]
	I0604 23:34:37.995237    4896 main.go:141] libmachine: [stdout =====>] : 172.20.128.16
	
	I0604 23:34:37.995237    4896 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:34:38.004617    4896 main.go:141] libmachine: Using SSH client type: native
	I0604 23:34:38.005250    4896 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf6a540] 0xf6d120 <nil>  [] 0s} 172.20.128.16 22 <nil> <nil>}
	I0604 23:34:38.005250    4896 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1717544072
	I0604 23:34:38.159316    4896 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Jun  4 23:34:32 UTC 2024
	
	I0604 23:34:38.159316    4896 fix.go:236] clock set: Tue Jun  4 23:34:32 UTC 2024
	 (err=<nil>)
	I0604 23:34:38.159316    4896 start.go:83] releasing machines lock for "multinode-022000-m03", held for 1m45.0513943s
	I0604 23:34:38.159656    4896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000-m03 ).state
	I0604 23:34:40.441262    4896 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:34:40.454716    4896 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:34:40.454716    4896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000-m03 ).networkadapters[0]).ipaddresses[0]
	I0604 23:34:43.187415    4896 main.go:141] libmachine: [stdout =====>] : 172.20.128.16
	
	I0604 23:34:43.200398    4896 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:34:43.204818    4896 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0604 23:34:43.205349    4896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000-m03 ).state
	I0604 23:34:43.220327    4896 ssh_runner.go:195] Run: systemctl --version
	I0604 23:34:43.220327    4896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000-m03 ).state
	I0604 23:34:45.569023    4896 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:34:45.569023    4896 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:34:45.569023    4896 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:34:45.569023    4896 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:34:45.569023    4896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000-m03 ).networkadapters[0]).ipaddresses[0]
	I0604 23:34:45.569023    4896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000-m03 ).networkadapters[0]).ipaddresses[0]
	I0604 23:34:48.385616    4896 main.go:141] libmachine: [stdout =====>] : 172.20.128.16
	
	I0604 23:34:48.385616    4896 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:34:48.399460    4896 sshutil.go:53] new ssh client: &{IP:172.20.128.16 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-022000-m03\id_rsa Username:docker}
	I0604 23:34:48.430107    4896 main.go:141] libmachine: [stdout =====>] : 172.20.128.16
	
	I0604 23:34:48.430107    4896 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:34:48.437175    4896 sshutil.go:53] new ssh client: &{IP:172.20.128.16 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-022000-m03\id_rsa Username:docker}
	I0604 23:34:48.619865    4896 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.4150046s)
	I0604 23:34:48.619865    4896 ssh_runner.go:235] Completed: systemctl --version: (5.3994962s)
	I0604 23:34:48.634554    4896 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0604 23:34:48.645004    4896 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0604 23:34:48.660347    4896 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0604 23:34:48.690764    4896 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0604 23:34:48.690764    4896 start.go:494] detecting cgroup driver to use...
	I0604 23:34:48.691096    4896 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0604 23:34:48.737244    4896 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0604 23:34:48.772756    4896 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0604 23:34:48.792210    4896 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0604 23:34:48.805447    4896 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0604 23:34:48.843306    4896 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0604 23:34:48.880505    4896 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0604 23:34:48.919929    4896 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0604 23:34:48.957416    4896 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0604 23:34:48.996278    4896 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0604 23:34:49.037209    4896 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0604 23:34:49.075374    4896 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0604 23:34:49.111616    4896 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0604 23:34:49.146382    4896 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0604 23:34:49.178690    4896 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0604 23:34:49.384522    4896 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0604 23:34:49.423924    4896 start.go:494] detecting cgroup driver to use...
	I0604 23:34:49.437671    4896 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0604 23:34:49.479064    4896 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0604 23:34:49.520987    4896 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0604 23:34:49.581156    4896 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0604 23:34:49.625918    4896 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0604 23:34:49.667825    4896 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0604 23:34:49.736482    4896 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0604 23:34:49.767560    4896 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0604 23:34:49.817975    4896 ssh_runner.go:195] Run: which cri-dockerd
	I0604 23:34:49.838210    4896 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0604 23:34:49.857820    4896 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0604 23:34:49.910297    4896 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0604 23:34:50.120475    4896 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0604 23:34:50.327638    4896 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0604 23:34:50.327638    4896 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0604 23:34:50.384444    4896 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0604 23:34:50.593064    4896 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0604 23:35:51.732703    4896 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.1389569s)
	I0604 23:35:51.744069    4896 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0604 23:35:51.782216    4896 out.go:177] 
	W0604 23:35:51.782510    4896 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jun 04 23:34:20 multinode-022000-m03 systemd[1]: Starting Docker Application Container Engine...
	Jun 04 23:34:20 multinode-022000-m03 dockerd[656]: time="2024-06-04T23:34:20.750374308Z" level=info msg="Starting up"
	Jun 04 23:34:20 multinode-022000-m03 dockerd[656]: time="2024-06-04T23:34:20.751291997Z" level=info msg="containerd not running, starting managed containerd"
	Jun 04 23:34:20 multinode-022000-m03 dockerd[656]: time="2024-06-04T23:34:20.752740479Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=662
	Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.803853355Z" level=info msg="starting containerd" revision=3a4de459a68952ffb703bbe7f2290861a75b6b67 version=v1.7.17
	Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.830327931Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.830386630Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.830469729Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.830489029Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.831028422Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.831127221Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.831350518Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.831454817Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.831495217Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.831507616Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.832007910Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.832913299Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.835951262Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.836058761Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.836213659Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.836373957Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.837084848Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.837226647Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.837548543Z" level=info msg="metadata content store policy set" policy=shared
	Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.847687419Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.847759418Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.847782318Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.847884016Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.847932116Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.848020315Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.848372510Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.848486509Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.848549208Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.848565208Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.848596508Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.848612707Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.848627607Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.848649407Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.848667207Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.848683207Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.848729906Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.848873404Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.848913404Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.848931704Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.848947003Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.848963703Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.848979403Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.848995003Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.849010803Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.849072502Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.849104901Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.849124401Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.849138401Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.849154001Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.849168901Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.849187300Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.849211600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.849234000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.849248000Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.849303299Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.849326199Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.849356598Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.849370698Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.849382298Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.849395898Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.849407598Z" level=info msg="NRI interface is disabled by configuration."
	Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.849668494Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.849793093Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.849900292Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.849924191Z" level=info msg="containerd successfully booted in 0.052179s"
	Jun 04 23:34:21 multinode-022000-m03 dockerd[656]: time="2024-06-04T23:34:21.817975646Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jun 04 23:34:21 multinode-022000-m03 dockerd[656]: time="2024-06-04T23:34:21.884393485Z" level=info msg="Loading containers: start."
	Jun 04 23:34:22 multinode-022000-m03 dockerd[656]: time="2024-06-04T23:34:22.237052095Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jun 04 23:34:22 multinode-022000-m03 dockerd[656]: time="2024-06-04T23:34:22.329899911Z" level=info msg="Loading containers: done."
	Jun 04 23:34:22 multinode-022000-m03 dockerd[656]: time="2024-06-04T23:34:22.357968283Z" level=info msg="Docker daemon" commit=8e96db1 containerd-snapshotter=false storage-driver=overlay2 version=26.1.3
	Jun 04 23:34:22 multinode-022000-m03 dockerd[656]: time="2024-06-04T23:34:22.359142424Z" level=info msg="Daemon has completed initialization"
	Jun 04 23:34:22 multinode-022000-m03 dockerd[656]: time="2024-06-04T23:34:22.421116770Z" level=info msg="API listen on /var/run/docker.sock"
	Jun 04 23:34:22 multinode-022000-m03 systemd[1]: Started Docker Application Container Engine.
	Jun 04 23:34:22 multinode-022000-m03 dockerd[656]: time="2024-06-04T23:34:22.422029302Z" level=info msg="API listen on [::]:2376"
	Jun 04 23:34:50 multinode-022000-m03 dockerd[656]: time="2024-06-04T23:34:50.617792370Z" level=info msg="Processing signal 'terminated'"
	Jun 04 23:34:50 multinode-022000-m03 systemd[1]: Stopping Docker Application Container Engine...
	Jun 04 23:34:50 multinode-022000-m03 dockerd[656]: time="2024-06-04T23:34:50.619424944Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jun 04 23:34:50 multinode-022000-m03 dockerd[656]: time="2024-06-04T23:34:50.619724058Z" level=info msg="Daemon shutdown complete"
	Jun 04 23:34:50 multinode-022000-m03 dockerd[656]: time="2024-06-04T23:34:50.619791961Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jun 04 23:34:50 multinode-022000-m03 dockerd[656]: time="2024-06-04T23:34:50.619850863Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jun 04 23:34:51 multinode-022000-m03 systemd[1]: docker.service: Deactivated successfully.
	Jun 04 23:34:51 multinode-022000-m03 systemd[1]: Stopped Docker Application Container Engine.
	Jun 04 23:34:51 multinode-022000-m03 systemd[1]: Starting Docker Application Container Engine...
	Jun 04 23:34:51 multinode-022000-m03 dockerd[1034]: time="2024-06-04T23:34:51.698252103Z" level=info msg="Starting up"
	Jun 04 23:35:51 multinode-022000-m03 dockerd[1034]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 04 23:35:51 multinode-022000-m03 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 04 23:35:51 multinode-022000-m03 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 04 23:35:51 multinode-022000-m03 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jun 04 23:34:20 multinode-022000-m03 systemd[1]: Starting Docker Application Container Engine...
	Jun 04 23:34:20 multinode-022000-m03 dockerd[656]: time="2024-06-04T23:34:20.750374308Z" level=info msg="Starting up"
	Jun 04 23:34:20 multinode-022000-m03 dockerd[656]: time="2024-06-04T23:34:20.751291997Z" level=info msg="containerd not running, starting managed containerd"
	Jun 04 23:34:20 multinode-022000-m03 dockerd[656]: time="2024-06-04T23:34:20.752740479Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=662
	Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.803853355Z" level=info msg="starting containerd" revision=3a4de459a68952ffb703bbe7f2290861a75b6b67 version=v1.7.17
	Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.830327931Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.830386630Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.830469729Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.830489029Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.831028422Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.831127221Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.831350518Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.831454817Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.831495217Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.831507616Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.832007910Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.832913299Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.835951262Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.836058761Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.836213659Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.836373957Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.837084848Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.837226647Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.837548543Z" level=info msg="metadata content store policy set" policy=shared
	Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.847687419Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.847759418Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.847782318Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.847884016Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.847932116Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.848020315Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.848372510Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.848486509Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.848549208Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.848565208Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.848596508Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.848612707Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.848627607Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.848649407Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.848667207Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.848683207Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.848729906Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.848873404Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.848913404Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.848931704Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.848947003Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.848963703Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.848979403Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.848995003Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.849010803Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.849072502Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.849104901Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.849124401Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.849138401Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.849154001Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.849168901Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.849187300Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.849211600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.849234000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.849248000Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.849303299Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.849326199Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.849356598Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.849370698Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.849382298Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.849395898Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.849407598Z" level=info msg="NRI interface is disabled by configuration."
	Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.849668494Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.849793093Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.849900292Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.849924191Z" level=info msg="containerd successfully booted in 0.052179s"
	Jun 04 23:34:21 multinode-022000-m03 dockerd[656]: time="2024-06-04T23:34:21.817975646Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jun 04 23:34:21 multinode-022000-m03 dockerd[656]: time="2024-06-04T23:34:21.884393485Z" level=info msg="Loading containers: start."
	Jun 04 23:34:22 multinode-022000-m03 dockerd[656]: time="2024-06-04T23:34:22.237052095Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jun 04 23:34:22 multinode-022000-m03 dockerd[656]: time="2024-06-04T23:34:22.329899911Z" level=info msg="Loading containers: done."
	Jun 04 23:34:22 multinode-022000-m03 dockerd[656]: time="2024-06-04T23:34:22.357968283Z" level=info msg="Docker daemon" commit=8e96db1 containerd-snapshotter=false storage-driver=overlay2 version=26.1.3
	Jun 04 23:34:22 multinode-022000-m03 dockerd[656]: time="2024-06-04T23:34:22.359142424Z" level=info msg="Daemon has completed initialization"
	Jun 04 23:34:22 multinode-022000-m03 dockerd[656]: time="2024-06-04T23:34:22.421116770Z" level=info msg="API listen on /var/run/docker.sock"
	Jun 04 23:34:22 multinode-022000-m03 systemd[1]: Started Docker Application Container Engine.
	Jun 04 23:34:22 multinode-022000-m03 dockerd[656]: time="2024-06-04T23:34:22.422029302Z" level=info msg="API listen on [::]:2376"
	Jun 04 23:34:50 multinode-022000-m03 dockerd[656]: time="2024-06-04T23:34:50.617792370Z" level=info msg="Processing signal 'terminated'"
	Jun 04 23:34:50 multinode-022000-m03 systemd[1]: Stopping Docker Application Container Engine...
	Jun 04 23:34:50 multinode-022000-m03 dockerd[656]: time="2024-06-04T23:34:50.619424944Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jun 04 23:34:50 multinode-022000-m03 dockerd[656]: time="2024-06-04T23:34:50.619724058Z" level=info msg="Daemon shutdown complete"
	Jun 04 23:34:50 multinode-022000-m03 dockerd[656]: time="2024-06-04T23:34:50.619791961Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jun 04 23:34:50 multinode-022000-m03 dockerd[656]: time="2024-06-04T23:34:50.619850863Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jun 04 23:34:51 multinode-022000-m03 systemd[1]: docker.service: Deactivated successfully.
	Jun 04 23:34:51 multinode-022000-m03 systemd[1]: Stopped Docker Application Container Engine.
	Jun 04 23:34:51 multinode-022000-m03 systemd[1]: Starting Docker Application Container Engine...
	Jun 04 23:34:51 multinode-022000-m03 dockerd[1034]: time="2024-06-04T23:34:51.698252103Z" level=info msg="Starting up"
	Jun 04 23:35:51 multinode-022000-m03 dockerd[1034]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 04 23:35:51 multinode-022000-m03 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 04 23:35:51 multinode-022000-m03 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 04 23:35:51 multinode-022000-m03 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0604 23:35:51.785570    4896 out.go:239] * 
	W0604 23:35:51.821752    4896 out.go:239] ╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                      │
	│    * If the above advice does not help, please let us know:                                                          │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                        │
	│                                                                                                                      │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                             │
	│    * Please also attach the following file to the GitHub issue:                                                      │
	│    * - C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube_node_5d8e12b0f871eb72ad0fbd8a3f088de82e3341c0_3.log    │
	│                                                                                                                      │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0604 23:35:51.823447    4896 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:284: W0604 23:32:50.656328    4896 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0604 23:32:50.751695    4896 out.go:291] Setting OutFile to fd 1124 ...
I0604 23:32:50.763550    4896 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0604 23:32:50.763550    4896 out.go:304] Setting ErrFile to fd 876...
I0604 23:32:50.763550    4896 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0604 23:32:50.794991    4896 mustload.go:65] Loading cluster: multinode-022000
I0604 23:32:50.795766    4896 config.go:182] Loaded profile config "multinode-022000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0604 23:32:50.797132    4896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000-m03 ).state
I0604 23:32:53.096023    4896 main.go:141] libmachine: [stdout =====>] : Off

                                                
                                                
I0604 23:32:53.096023    4896 main.go:141] libmachine: [stderr =====>] : 
W0604 23:32:53.096023    4896 host.go:58] "multinode-022000-m03" host status: Stopped
I0604 23:32:53.100599    4896 out.go:177] * Starting "multinode-022000-m03" worker node in "multinode-022000" cluster
I0604 23:32:53.103197    4896 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
I0604 23:32:53.103331    4896 preload.go:147] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
I0604 23:32:53.103331    4896 cache.go:56] Caching tarball of preloaded images
I0604 23:32:53.103860    4896 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0604 23:32:53.104122    4896 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
I0604 23:32:53.104122    4896 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-022000\config.json ...
I0604 23:32:53.106701    4896 start.go:360] acquireMachinesLock for multinode-022000-m03: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0604 23:32:53.107130    4896 start.go:364] duration metric: took 372.3µs to acquireMachinesLock for "multinode-022000-m03"
I0604 23:32:53.107130    4896 start.go:96] Skipping create...Using existing machine configuration
I0604 23:32:53.107330    4896 fix.go:54] fixHost starting: m03
I0604 23:32:53.107454    4896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000-m03 ).state
I0604 23:32:55.401236    4896 main.go:141] libmachine: [stdout =====>] : Off

                                                
                                                
I0604 23:32:55.401236    4896 main.go:141] libmachine: [stderr =====>] : 
I0604 23:32:55.410288    4896 fix.go:112] recreateIfNeeded on multinode-022000-m03: state=Stopped err=<nil>
W0604 23:32:55.410288    4896 fix.go:138] unexpected machine state, will restart: <nil>
I0604 23:32:55.413589    4896 out.go:177] * Restarting existing hyperv VM for "multinode-022000-m03" ...
I0604 23:32:55.416304    4896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-022000-m03
I0604 23:32:58.724730    4896 main.go:141] libmachine: [stdout =====>] : 
I0604 23:32:58.724730    4896 main.go:141] libmachine: [stderr =====>] : 
I0604 23:32:58.724730    4896 main.go:141] libmachine: Waiting for host to start...
I0604 23:32:58.724730    4896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000-m03 ).state
I0604 23:33:01.116708    4896 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0604 23:33:01.116708    4896 main.go:141] libmachine: [stderr =====>] : 
I0604 23:33:01.116708    4896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000-m03 ).networkadapters[0]).ipaddresses[0]
I0604 23:33:03.826898    4896 main.go:141] libmachine: [stdout =====>] : 
I0604 23:33:03.827125    4896 main.go:141] libmachine: [stderr =====>] : 
I0604 23:33:04.829994    4896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000-m03 ).state
I0604 23:33:07.194239    4896 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0604 23:33:07.197075    4896 main.go:141] libmachine: [stderr =====>] : 
I0604 23:33:07.197118    4896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000-m03 ).networkadapters[0]).ipaddresses[0]
I0604 23:33:09.890259    4896 main.go:141] libmachine: [stdout =====>] : 
I0604 23:33:09.890259    4896 main.go:141] libmachine: [stderr =====>] : 
I0604 23:33:10.891822    4896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000-m03 ).state
I0604 23:33:13.239384    4896 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0604 23:33:13.239384    4896 main.go:141] libmachine: [stderr =====>] : 
I0604 23:33:13.242526    4896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000-m03 ).networkadapters[0]).ipaddresses[0]
I0604 23:33:15.997068    4896 main.go:141] libmachine: [stdout =====>] : 
I0604 23:33:15.997068    4896 main.go:141] libmachine: [stderr =====>] : 
I0604 23:33:17.019354    4896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000-m03 ).state
I0604 23:33:19.371699    4896 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0604 23:33:19.371699    4896 main.go:141] libmachine: [stderr =====>] : 
I0604 23:33:19.384823    4896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000-m03 ).networkadapters[0]).ipaddresses[0]
I0604 23:33:22.113040    4896 main.go:141] libmachine: [stdout =====>] : 
I0604 23:33:22.113040    4896 main.go:141] libmachine: [stderr =====>] : 
I0604 23:33:23.128061    4896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000-m03 ).state
I0604 23:33:25.502676    4896 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0604 23:33:25.502676    4896 main.go:141] libmachine: [stderr =====>] : 
I0604 23:33:25.502848    4896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000-m03 ).networkadapters[0]).ipaddresses[0]
I0604 23:33:28.279529    4896 main.go:141] libmachine: [stdout =====>] : 172.20.128.16

                                                
                                                
I0604 23:33:28.279529    4896 main.go:141] libmachine: [stderr =====>] : 
I0604 23:33:28.285441    4896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000-m03 ).state
I0604 23:33:30.578651    4896 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0604 23:33:30.578651    4896 main.go:141] libmachine: [stderr =====>] : 
I0604 23:33:30.578939    4896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000-m03 ).networkadapters[0]).ipaddresses[0]
I0604 23:33:33.376058    4896 main.go:141] libmachine: [stdout =====>] : 172.20.128.16

                                                
                                                
I0604 23:33:33.376058    4896 main.go:141] libmachine: [stderr =====>] : 
I0604 23:33:33.376569    4896 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-022000\config.json ...
I0604 23:33:33.379096    4896 machine.go:94] provisionDockerMachine start ...
I0604 23:33:33.379183    4896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000-m03 ).state
I0604 23:33:35.647263    4896 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0604 23:33:35.647263    4896 main.go:141] libmachine: [stderr =====>] : 
I0604 23:33:35.652283    4896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000-m03 ).networkadapters[0]).ipaddresses[0]
I0604 23:33:38.401883    4896 main.go:141] libmachine: [stdout =====>] : 172.20.128.16

                                                
                                                
I0604 23:33:38.401883    4896 main.go:141] libmachine: [stderr =====>] : 
I0604 23:33:38.421527    4896 main.go:141] libmachine: Using SSH client type: native
I0604 23:33:38.422108    4896 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf6a540] 0xf6d120 <nil>  [] 0s} 172.20.128.16 22 <nil> <nil>}
I0604 23:33:38.422108    4896 main.go:141] libmachine: About to run SSH command:
hostname
I0604 23:33:38.570866    4896 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube

                                                
                                                
I0604 23:33:38.571026    4896 buildroot.go:166] provisioning hostname "multinode-022000-m03"
I0604 23:33:38.571185    4896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000-m03 ).state
I0604 23:33:40.839783    4896 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0604 23:33:40.839783    4896 main.go:141] libmachine: [stderr =====>] : 
I0604 23:33:40.852720    4896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000-m03 ).networkadapters[0]).ipaddresses[0]
I0604 23:33:43.557954    4896 main.go:141] libmachine: [stdout =====>] : 172.20.128.16

                                                
                                                
I0604 23:33:43.557954    4896 main.go:141] libmachine: [stderr =====>] : 
I0604 23:33:43.576105    4896 main.go:141] libmachine: Using SSH client type: native
I0604 23:33:43.576713    4896 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf6a540] 0xf6d120 <nil>  [] 0s} 172.20.128.16 22 <nil> <nil>}
I0604 23:33:43.576889    4896 main.go:141] libmachine: About to run SSH command:
sudo hostname multinode-022000-m03 && echo "multinode-022000-m03" | sudo tee /etc/hostname
I0604 23:33:43.780818    4896 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-022000-m03

                                                
                                                
I0604 23:33:43.780818    4896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000-m03 ).state
I0604 23:33:46.043473    4896 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0604 23:33:46.057007    4896 main.go:141] libmachine: [stderr =====>] : 
I0604 23:33:46.057007    4896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000-m03 ).networkadapters[0]).ipaddresses[0]
I0604 23:33:48.779572    4896 main.go:141] libmachine: [stdout =====>] : 172.20.128.16

                                                
                                                
I0604 23:33:48.779572    4896 main.go:141] libmachine: [stderr =====>] : 
I0604 23:33:48.797531    4896 main.go:141] libmachine: Using SSH client type: native
I0604 23:33:48.798238    4896 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf6a540] 0xf6d120 <nil>  [] 0s} 172.20.128.16 22 <nil> <nil>}
I0604 23:33:48.798822    4896 main.go:141] libmachine: About to run SSH command:

                                                
                                                
		if ! grep -xq '.*\smultinode-022000-m03' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-022000-m03/g' /etc/hosts;
			else 
				echo '127.0.1.1 multinode-022000-m03' | sudo tee -a /etc/hosts; 
			fi
		fi
I0604 23:33:48.950478    4896 main.go:141] libmachine: SSH cmd err, output: <nil>: 
I0604 23:33:48.950478    4896 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
I0604 23:33:48.950478    4896 buildroot.go:174] setting up certificates
I0604 23:33:48.950478    4896 provision.go:84] configureAuth start
I0604 23:33:48.950478    4896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000-m03 ).state
I0604 23:33:51.217823    4896 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0604 23:33:51.218168    4896 main.go:141] libmachine: [stderr =====>] : 
I0604 23:33:51.218168    4896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000-m03 ).networkadapters[0]).ipaddresses[0]
I0604 23:33:53.989487    4896 main.go:141] libmachine: [stdout =====>] : 172.20.128.16

                                                
                                                
I0604 23:33:54.002977    4896 main.go:141] libmachine: [stderr =====>] : 
I0604 23:33:54.003102    4896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000-m03 ).state
I0604 23:33:56.341311    4896 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0604 23:33:56.341311    4896 main.go:141] libmachine: [stderr =====>] : 
I0604 23:33:56.341311    4896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000-m03 ).networkadapters[0]).ipaddresses[0]
I0604 23:33:59.127027    4896 main.go:141] libmachine: [stdout =====>] : 172.20.128.16

                                                
                                                
I0604 23:33:59.139778    4896 main.go:141] libmachine: [stderr =====>] : 
I0604 23:33:59.139778    4896 provision.go:143] copyHostCerts
I0604 23:33:59.140062    4896 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
I0604 23:33:59.140543    4896 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
I0604 23:33:59.140650    4896 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
I0604 23:33:59.141422    4896 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
I0604 23:33:59.143242    4896 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
I0604 23:33:59.143738    4896 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
I0604 23:33:59.143738    4896 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
I0604 23:33:59.144217    4896 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
I0604 23:33:59.145586    4896 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
I0604 23:33:59.146071    4896 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
I0604 23:33:59.146071    4896 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
I0604 23:33:59.146635    4896 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
I0604 23:33:59.147711    4896 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-022000-m03 san=[127.0.0.1 172.20.128.16 localhost minikube multinode-022000-m03]
I0604 23:33:59.480532    4896 provision.go:177] copyRemoteCerts
I0604 23:33:59.501266    4896 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0604 23:33:59.501338    4896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000-m03 ).state
I0604 23:34:01.773949    4896 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0604 23:34:01.785998    4896 main.go:141] libmachine: [stderr =====>] : 
I0604 23:34:01.786111    4896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000-m03 ).networkadapters[0]).ipaddresses[0]
I0604 23:34:04.552414    4896 main.go:141] libmachine: [stdout =====>] : 172.20.128.16

                                                
                                                
I0604 23:34:04.552414    4896 main.go:141] libmachine: [stderr =====>] : 
I0604 23:34:04.565601    4896 sshutil.go:53] new ssh client: &{IP:172.20.128.16 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-022000-m03\id_rsa Username:docker}
I0604 23:34:04.679181    4896 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.1778037s)
I0604 23:34:04.679181    4896 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
I0604 23:34:04.679765    4896 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0604 23:34:04.730707    4896 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
I0604 23:34:04.731947    4896 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
I0604 23:34:04.783286    4896 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
I0604 23:34:04.786522    4896 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0604 23:34:04.836395    4896 provision.go:87] duration metric: took 15.885797s to configureAuth
I0604 23:34:04.836395    4896 buildroot.go:189] setting minikube options for container-runtime
I0604 23:34:04.837320    4896 config.go:182] Loaded profile config "multinode-022000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0604 23:34:04.837320    4896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000-m03 ).state
I0604 23:34:07.109165    4896 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0604 23:34:07.109458    4896 main.go:141] libmachine: [stderr =====>] : 
I0604 23:34:07.109458    4896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000-m03 ).networkadapters[0]).ipaddresses[0]
I0604 23:34:09.828941    4896 main.go:141] libmachine: [stdout =====>] : 172.20.128.16

                                                
                                                
I0604 23:34:09.828941    4896 main.go:141] libmachine: [stderr =====>] : 
I0604 23:34:09.848833    4896 main.go:141] libmachine: Using SSH client type: native
I0604 23:34:09.849871    4896 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf6a540] 0xf6d120 <nil>  [] 0s} 172.20.128.16 22 <nil> <nil>}
I0604 23:34:09.849871    4896 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0604 23:34:09.985395    4896 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs

                                                
                                                
I0604 23:34:09.985395    4896 buildroot.go:70] root file system type: tmpfs
I0604 23:34:09.985561    4896 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
I0604 23:34:09.985677    4896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000-m03 ).state
I0604 23:34:12.265118    4896 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0604 23:34:12.265118    4896 main.go:141] libmachine: [stderr =====>] : 
I0604 23:34:12.265763    4896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000-m03 ).networkadapters[0]).ipaddresses[0]
I0604 23:34:14.977873    4896 main.go:141] libmachine: [stdout =====>] : 172.20.128.16

                                                
                                                
I0604 23:34:14.990924    4896 main.go:141] libmachine: [stderr =====>] : 
I0604 23:34:14.999656    4896 main.go:141] libmachine: Using SSH client type: native
I0604 23:34:15.000448    4896 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf6a540] 0xf6d120 <nil>  [] 0s} 172.20.128.16 22 <nil> <nil>}
I0604 23:34:15.000448    4896 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target  minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket 
StartLimitBurst=3
StartLimitIntervalSec=60

                                                
                                                
[Service]
Type=notify
Restart=on-failure

                                                
                                                

                                                
                                                

                                                
                                                
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

                                                
                                                
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
ExecReload=/bin/kill -s HUP \$MAINPID

                                                
                                                
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

                                                
                                                
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

                                                
                                                
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

                                                
                                                
# kill only the docker process, not all processes in the cgroup
KillMode=process

                                                
                                                
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0604 23:34:15.163653    4896 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target  minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket 
StartLimitBurst=3
StartLimitIntervalSec=60

                                                
                                                
[Service]
Type=notify
Restart=on-failure

                                                
                                                

                                                
                                                

                                                
                                                
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

                                                
                                                
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
ExecReload=/bin/kill -s HUP $MAINPID

                                                
                                                
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

                                                
                                                
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

                                                
                                                
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

                                                
                                                
# kill only the docker process, not all processes in the cgroup
KillMode=process

                                                
                                                
[Install]
WantedBy=multi-user.target

                                                
                                                
I0604 23:34:15.163742    4896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000-m03 ).state
I0604 23:34:17.402172    4896 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0604 23:34:17.406509    4896 main.go:141] libmachine: [stderr =====>] : 
I0604 23:34:17.406509    4896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000-m03 ).networkadapters[0]).ipaddresses[0]
I0604 23:34:20.115627    4896 main.go:141] libmachine: [stdout =====>] : 172.20.128.16

                                                
                                                
I0604 23:34:20.115627    4896 main.go:141] libmachine: [stderr =====>] : 
I0604 23:34:20.124078    4896 main.go:141] libmachine: Using SSH client type: native
I0604 23:34:20.124708    4896 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf6a540] 0xf6d120 <nil>  [] 0s} 172.20.128.16 22 <nil> <nil>}
I0604 23:34:20.124708    4896 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0604 23:34:22.432119    4896 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.

                                                
                                                
I0604 23:34:22.432264    4896 machine.go:97] duration metric: took 49.0527108s to provisionDockerMachine
I0604 23:34:22.432264    4896 start.go:293] postStartSetup for "multinode-022000-m03" (driver="hyperv")
I0604 23:34:22.432346    4896 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0604 23:34:22.446377    4896 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0604 23:34:22.446377    4896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000-m03 ).state
I0604 23:34:24.754164    4896 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0604 23:34:24.754164    4896 main.go:141] libmachine: [stderr =====>] : 
I0604 23:34:24.767406    4896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000-m03 ).networkadapters[0]).ipaddresses[0]
I0604 23:34:27.498678    4896 main.go:141] libmachine: [stdout =====>] : 172.20.128.16

                                                
                                                
I0604 23:34:27.498678    4896 main.go:141] libmachine: [stderr =====>] : 
I0604 23:34:27.505501    4896 sshutil.go:53] new ssh client: &{IP:172.20.128.16 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-022000-m03\id_rsa Username:docker}
I0604 23:34:27.625839    4896 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.179423s)
I0604 23:34:27.641641    4896 ssh_runner.go:195] Run: cat /etc/os-release
I0604 23:34:27.651634    4896 info.go:137] Remote host: Buildroot 2023.02.9
I0604 23:34:27.651634    4896 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
I0604 23:34:27.652038    4896 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
I0604 23:34:27.652780    4896 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\140642.pem -> 140642.pem in /etc/ssl/certs
I0604 23:34:27.652780    4896 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\140642.pem -> /etc/ssl/certs/140642.pem
I0604 23:34:27.664708    4896 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0604 23:34:27.688915    4896 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\140642.pem --> /etc/ssl/certs/140642.pem (1708 bytes)
I0604 23:34:27.745593    4896 start.go:296] duration metric: took 5.3132069s for postStartSetup
I0604 23:34:27.745593    4896 fix.go:56] duration metric: took 1m34.6377518s for fixHost
I0604 23:34:27.745593    4896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000-m03 ).state
I0604 23:34:30.039100    4896 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0604 23:34:30.053213    4896 main.go:141] libmachine: [stderr =====>] : 
I0604 23:34:30.053429    4896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000-m03 ).networkadapters[0]).ipaddresses[0]
I0604 23:34:32.780869    4896 main.go:141] libmachine: [stdout =====>] : 172.20.128.16

                                                
                                                
I0604 23:34:32.780869    4896 main.go:141] libmachine: [stderr =====>] : 
I0604 23:34:32.800997    4896 main.go:141] libmachine: Using SSH client type: native
I0604 23:34:32.800997    4896 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf6a540] 0xf6d120 <nil>  [] 0s} 172.20.128.16 22 <nil> <nil>}
I0604 23:34:32.800997    4896 main.go:141] libmachine: About to run SSH command:
date +%!s(MISSING).%!N(MISSING)
I0604 23:34:32.940287    4896 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717544072.934468388

                                                
                                                
I0604 23:34:32.940287    4896 fix.go:216] guest clock: 1717544072.934468388
I0604 23:34:32.940287    4896 fix.go:229] Guest: 2024-06-04 23:34:32.934468388 +0000 UTC Remote: 2024-06-04 23:34:27.7455936 +0000 UTC m=+97.183119401 (delta=5.188874788s)
I0604 23:34:32.940287    4896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000-m03 ).state
I0604 23:34:35.236109    4896 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0604 23:34:35.236109    4896 main.go:141] libmachine: [stderr =====>] : 
I0604 23:34:35.248493    4896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000-m03 ).networkadapters[0]).ipaddresses[0]
I0604 23:34:37.995237    4896 main.go:141] libmachine: [stdout =====>] : 172.20.128.16

                                                
                                                
I0604 23:34:37.995237    4896 main.go:141] libmachine: [stderr =====>] : 
I0604 23:34:38.004617    4896 main.go:141] libmachine: Using SSH client type: native
I0604 23:34:38.005250    4896 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf6a540] 0xf6d120 <nil>  [] 0s} 172.20.128.16 22 <nil> <nil>}
I0604 23:34:38.005250    4896 main.go:141] libmachine: About to run SSH command:
sudo date -s @1717544072
I0604 23:34:38.159316    4896 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Jun  4 23:34:32 UTC 2024

                                                
                                                
I0604 23:34:38.159316    4896 fix.go:236] clock set: Tue Jun  4 23:34:32 UTC 2024
(err=<nil>)
I0604 23:34:38.159316    4896 start.go:83] releasing machines lock for "multinode-022000-m03", held for 1m45.0513943s
I0604 23:34:38.159656    4896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000-m03 ).state
I0604 23:34:40.441262    4896 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0604 23:34:40.454716    4896 main.go:141] libmachine: [stderr =====>] : 
I0604 23:34:40.454716    4896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000-m03 ).networkadapters[0]).ipaddresses[0]
I0604 23:34:43.187415    4896 main.go:141] libmachine: [stdout =====>] : 172.20.128.16

                                                
                                                
I0604 23:34:43.200398    4896 main.go:141] libmachine: [stderr =====>] : 
I0604 23:34:43.204818    4896 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0604 23:34:43.205349    4896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000-m03 ).state
I0604 23:34:43.220327    4896 ssh_runner.go:195] Run: systemctl --version
I0604 23:34:43.220327    4896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000-m03 ).state
I0604 23:34:45.569023    4896 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0604 23:34:45.569023    4896 main.go:141] libmachine: [stderr =====>] : 
I0604 23:34:45.569023    4896 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0604 23:34:45.569023    4896 main.go:141] libmachine: [stderr =====>] : 
I0604 23:34:45.569023    4896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000-m03 ).networkadapters[0]).ipaddresses[0]
I0604 23:34:45.569023    4896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000-m03 ).networkadapters[0]).ipaddresses[0]
I0604 23:34:48.385616    4896 main.go:141] libmachine: [stdout =====>] : 172.20.128.16

                                                
                                                
I0604 23:34:48.385616    4896 main.go:141] libmachine: [stderr =====>] : 
I0604 23:34:48.399460    4896 sshutil.go:53] new ssh client: &{IP:172.20.128.16 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-022000-m03\id_rsa Username:docker}
I0604 23:34:48.430107    4896 main.go:141] libmachine: [stdout =====>] : 172.20.128.16

                                                
                                                
I0604 23:34:48.430107    4896 main.go:141] libmachine: [stderr =====>] : 
I0604 23:34:48.437175    4896 sshutil.go:53] new ssh client: &{IP:172.20.128.16 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-022000-m03\id_rsa Username:docker}
I0604 23:34:48.619865    4896 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.4150046s)
I0604 23:34:48.619865    4896 ssh_runner.go:235] Completed: systemctl --version: (5.3994962s)
I0604 23:34:48.634554    4896 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W0604 23:34:48.645004    4896 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0604 23:34:48.660347    4896 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0604 23:34:48.690764    4896 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I0604 23:34:48.690764    4896 start.go:494] detecting cgroup driver to use...
I0604 23:34:48.691096    4896 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0604 23:34:48.737244    4896 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I0604 23:34:48.772756    4896 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0604 23:34:48.792210    4896 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0604 23:34:48.805447    4896 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0604 23:34:48.843306    4896 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0604 23:34:48.880505    4896 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0604 23:34:48.919929    4896 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0604 23:34:48.957416    4896 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0604 23:34:48.996278    4896 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0604 23:34:49.037209    4896 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0604 23:34:49.075374    4896 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I0604 23:34:49.111616    4896 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0604 23:34:49.146382    4896 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0604 23:34:49.178690    4896 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0604 23:34:49.384522    4896 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0604 23:34:49.423924    4896 start.go:494] detecting cgroup driver to use...
I0604 23:34:49.437671    4896 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0604 23:34:49.479064    4896 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0604 23:34:49.520987    4896 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I0604 23:34:49.581156    4896 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0604 23:34:49.625918    4896 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0604 23:34:49.667825    4896 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0604 23:34:49.736482    4896 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0604 23:34:49.767560    4896 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0604 23:34:49.817975    4896 ssh_runner.go:195] Run: which cri-dockerd
I0604 23:34:49.838210    4896 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0604 23:34:49.857820    4896 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
I0604 23:34:49.910297    4896 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0604 23:34:50.120475    4896 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0604 23:34:50.327638    4896 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
I0604 23:34:50.327638    4896 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
I0604 23:34:50.384444    4896 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0604 23:34:50.593064    4896 ssh_runner.go:195] Run: sudo systemctl restart docker
I0604 23:35:51.732703    4896 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.1389569s)
I0604 23:35:51.744069    4896 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
I0604 23:35:51.782216    4896 out.go:177] 
W0604 23:35:51.782510    4896 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
stdout:

                                                
                                                
stderr:
Job for docker.service failed because the control process exited with error code.
See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.

                                                
                                                
sudo journalctl --no-pager -u docker:
-- stdout --
Jun 04 23:34:20 multinode-022000-m03 systemd[1]: Starting Docker Application Container Engine...
Jun 04 23:34:20 multinode-022000-m03 dockerd[656]: time="2024-06-04T23:34:20.750374308Z" level=info msg="Starting up"
Jun 04 23:34:20 multinode-022000-m03 dockerd[656]: time="2024-06-04T23:34:20.751291997Z" level=info msg="containerd not running, starting managed containerd"
Jun 04 23:34:20 multinode-022000-m03 dockerd[656]: time="2024-06-04T23:34:20.752740479Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=662
Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.803853355Z" level=info msg="starting containerd" revision=3a4de459a68952ffb703bbe7f2290861a75b6b67 version=v1.7.17
Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.830327931Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.830386630Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.830469729Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.830489029Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.831028422Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.831127221Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.831350518Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.831454817Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.831495217Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.831507616Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.832007910Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.832913299Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.835951262Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.836058761Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.836213659Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.836373957Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.837084848Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.837226647Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.837548543Z" level=info msg="metadata content store policy set" policy=shared
Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.847687419Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.847759418Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.847782318Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.847884016Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.847932116Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.848020315Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.848372510Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.848486509Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.848549208Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.848565208Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.848596508Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.848612707Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.848627607Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.848649407Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.848667207Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.848683207Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.848729906Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.848873404Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.848913404Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.848931704Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.848947003Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.848963703Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.848979403Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.848995003Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.849010803Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.849072502Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.849104901Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.849124401Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.849138401Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.849154001Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.849168901Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.849187300Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.849211600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.849234000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.849248000Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.849303299Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.849326199Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.849356598Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.849370698Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.849382298Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.849395898Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.849407598Z" level=info msg="NRI interface is disabled by configuration."
Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.849668494Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.849793093Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.849900292Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
Jun 04 23:34:20 multinode-022000-m03 dockerd[662]: time="2024-06-04T23:34:20.849924191Z" level=info msg="containerd successfully booted in 0.052179s"
Jun 04 23:34:21 multinode-022000-m03 dockerd[656]: time="2024-06-04T23:34:21.817975646Z" level=info msg="[graphdriver] trying configured driver: overlay2"
Jun 04 23:34:21 multinode-022000-m03 dockerd[656]: time="2024-06-04T23:34:21.884393485Z" level=info msg="Loading containers: start."
Jun 04 23:34:22 multinode-022000-m03 dockerd[656]: time="2024-06-04T23:34:22.237052095Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Jun 04 23:34:22 multinode-022000-m03 dockerd[656]: time="2024-06-04T23:34:22.329899911Z" level=info msg="Loading containers: done."
Jun 04 23:34:22 multinode-022000-m03 dockerd[656]: time="2024-06-04T23:34:22.357968283Z" level=info msg="Docker daemon" commit=8e96db1 containerd-snapshotter=false storage-driver=overlay2 version=26.1.3
Jun 04 23:34:22 multinode-022000-m03 dockerd[656]: time="2024-06-04T23:34:22.359142424Z" level=info msg="Daemon has completed initialization"
Jun 04 23:34:22 multinode-022000-m03 dockerd[656]: time="2024-06-04T23:34:22.421116770Z" level=info msg="API listen on /var/run/docker.sock"
Jun 04 23:34:22 multinode-022000-m03 systemd[1]: Started Docker Application Container Engine.
Jun 04 23:34:22 multinode-022000-m03 dockerd[656]: time="2024-06-04T23:34:22.422029302Z" level=info msg="API listen on [::]:2376"
Jun 04 23:34:50 multinode-022000-m03 dockerd[656]: time="2024-06-04T23:34:50.617792370Z" level=info msg="Processing signal 'terminated'"
Jun 04 23:34:50 multinode-022000-m03 systemd[1]: Stopping Docker Application Container Engine...
Jun 04 23:34:50 multinode-022000-m03 dockerd[656]: time="2024-06-04T23:34:50.619424944Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
Jun 04 23:34:50 multinode-022000-m03 dockerd[656]: time="2024-06-04T23:34:50.619724058Z" level=info msg="Daemon shutdown complete"
Jun 04 23:34:50 multinode-022000-m03 dockerd[656]: time="2024-06-04T23:34:50.619791961Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
Jun 04 23:34:50 multinode-022000-m03 dockerd[656]: time="2024-06-04T23:34:50.619850863Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
Jun 04 23:34:51 multinode-022000-m03 systemd[1]: docker.service: Deactivated successfully.
Jun 04 23:34:51 multinode-022000-m03 systemd[1]: Stopped Docker Application Container Engine.
Jun 04 23:34:51 multinode-022000-m03 systemd[1]: Starting Docker Application Container Engine...
Jun 04 23:34:51 multinode-022000-m03 dockerd[1034]: time="2024-06-04T23:34:51.698252103Z" level=info msg="Starting up"
Jun 04 23:35:51 multinode-022000-m03 dockerd[1034]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Jun 04 23:35:51 multinode-022000-m03 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Jun 04 23:35:51 multinode-022000-m03 systemd[1]: docker.service: Failed with result 'exit-code'.
Jun 04 23:35:51 multinode-022000-m03 systemd[1]: Failed to start Docker Application Container Engine.

                                                
                                                
-- /stdout --
W0604 23:35:51.785570    4896 out.go:239] * 
W0604 23:35:51.821752    4896 out.go:239] ╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                      │
│    * If the above advice does not help, please let us know:                                                          │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                        │
│                                                                                                                      │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                             │
│    * Please also attach the following file to the GitHub issue:                                                      │
│    * - C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube_node_5d8e12b0f871eb72ad0fbd8a3f088de82e3341c0_3.log    │
│                                                                                                                      │
╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0604 23:35:51.823447    4896 out.go:177] 
multinode_test.go:285: node start returned an error. args "out/minikube-windows-amd64.exe -p multinode-022000 node start m03 -v=7 --alsologtostderr": exit status 90
multinode_test.go:290: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-022000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-022000 status -v=7 --alsologtostderr: exit status 2 (38.064898s)

                                                
                                                
-- stdout --
	multinode-022000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-022000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-022000-m03
	type: Worker
	host: Running
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0604 23:35:52.383768    4480 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0604 23:35:52.478265    4480 out.go:291] Setting OutFile to fd 1328 ...
	I0604 23:35:52.481349    4480 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0604 23:35:52.481349    4480 out.go:304] Setting ErrFile to fd 1176...
	I0604 23:35:52.481641    4480 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0604 23:35:52.502138    4480 out.go:298] Setting JSON to false
	I0604 23:35:52.502241    4480 mustload.go:65] Loading cluster: multinode-022000
	I0604 23:35:52.502364    4480 notify.go:220] Checking for updates...
	I0604 23:35:52.503094    4480 config.go:182] Loaded profile config "multinode-022000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0604 23:35:52.503192    4480 status.go:255] checking status of multinode-022000 ...
	I0604 23:35:52.504175    4480 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000 ).state
	I0604 23:35:54.943490    4480 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:35:54.943572    4480 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:35:54.943572    4480 status.go:330] multinode-022000 host status = "Running" (err=<nil>)
	I0604 23:35:54.943663    4480 host.go:66] Checking if "multinode-022000" exists ...
	I0604 23:35:54.944369    4480 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000 ).state
	I0604 23:35:57.248511    4480 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:35:57.248511    4480 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:35:57.261257    4480 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000 ).networkadapters[0]).ipaddresses[0]
	I0604 23:35:59.986275    4480 main.go:141] libmachine: [stdout =====>] : 172.20.128.97
	
	I0604 23:35:59.986488    4480 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:35:59.986488    4480 host.go:66] Checking if "multinode-022000" exists ...
	I0604 23:35:59.998169    4480 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0604 23:35:59.998169    4480 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000 ).state
	I0604 23:36:02.274283    4480 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:36:02.274283    4480 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:36:02.274283    4480 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000 ).networkadapters[0]).ipaddresses[0]
	I0604 23:36:04.994655    4480 main.go:141] libmachine: [stdout =====>] : 172.20.128.97
	
	I0604 23:36:04.994655    4480 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:36:05.007254    4480 sshutil.go:53] new ssh client: &{IP:172.20.128.97 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-022000\id_rsa Username:docker}
	I0604 23:36:05.107392    4480 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (5.1091824s)
	I0604 23:36:05.121437    4480 ssh_runner.go:195] Run: systemctl --version
	I0604 23:36:05.143152    4480 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0604 23:36:05.174790    4480 kubeconfig.go:125] found "multinode-022000" server: "https://172.20.128.97:8443"
	I0604 23:36:05.174790    4480 api_server.go:166] Checking apiserver status ...
	I0604 23:36:05.187127    4480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0604 23:36:05.234383    4480 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2011/cgroup
	W0604 23:36:05.255558    4480 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2011/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0604 23:36:05.268406    4480 ssh_runner.go:195] Run: ls
	I0604 23:36:05.276953    4480 api_server.go:253] Checking apiserver healthz at https://172.20.128.97:8443/healthz ...
	I0604 23:36:05.283698    4480 api_server.go:279] https://172.20.128.97:8443/healthz returned 200:
	ok
	I0604 23:36:05.283698    4480 status.go:422] multinode-022000 apiserver status = Running (err=<nil>)
	I0604 23:36:05.283698    4480 status.go:257] multinode-022000 status: &{Name:multinode-022000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0604 23:36:05.283698    4480 status.go:255] checking status of multinode-022000-m02 ...
	I0604 23:36:05.285740    4480 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000-m02 ).state
	I0604 23:36:07.525090    4480 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:36:07.538305    4480 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:36:07.538511    4480 status.go:330] multinode-022000-m02 host status = "Running" (err=<nil>)
	I0604 23:36:07.538573    4480 host.go:66] Checking if "multinode-022000-m02" exists ...
	I0604 23:36:07.539369    4480 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000-m02 ).state
	I0604 23:36:09.803031    4480 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:36:09.814653    4480 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:36:09.814802    4480 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000-m02 ).networkadapters[0]).ipaddresses[0]
	I0604 23:36:12.475066    4480 main.go:141] libmachine: [stdout =====>] : 172.20.130.221
	
	I0604 23:36:12.475066    4480 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:36:12.475066    4480 host.go:66] Checking if "multinode-022000-m02" exists ...
	I0604 23:36:12.499867    4480 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0604 23:36:12.499867    4480 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000-m02 ).state
	I0604 23:36:14.787136    4480 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:36:14.787136    4480 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:36:14.787136    4480 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000-m02 ).networkadapters[0]).ipaddresses[0]
	I0604 23:36:17.490807    4480 main.go:141] libmachine: [stdout =====>] : 172.20.130.221
	
	I0604 23:36:17.490807    4480 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:36:17.490807    4480 sshutil.go:53] new ssh client: &{IP:172.20.130.221 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-022000-m02\id_rsa Username:docker}
	I0604 23:36:17.583982    4480 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (5.0840737s)
	I0604 23:36:17.596444    4480 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0604 23:36:17.626320    4480 status.go:257] multinode-022000-m02 status: &{Name:multinode-022000-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0604 23:36:17.626320    4480 status.go:255] checking status of multinode-022000-m03 ...
	I0604 23:36:17.627293    4480 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000-m03 ).state
	I0604 23:36:19.910715    4480 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:36:19.922972    4480 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:36:19.922972    4480 status.go:330] multinode-022000-m03 host status = "Running" (err=<nil>)
	I0604 23:36:19.922972    4480 host.go:66] Checking if "multinode-022000-m03" exists ...
	I0604 23:36:19.923872    4480 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000-m03 ).state
	I0604 23:36:22.257738    4480 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:36:22.257738    4480 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:36:22.271032    4480 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000-m03 ).networkadapters[0]).ipaddresses[0]
	I0604 23:36:25.084511    4480 main.go:141] libmachine: [stdout =====>] : 172.20.128.16
	
	I0604 23:36:25.084511    4480 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:36:25.084511    4480 host.go:66] Checking if "multinode-022000-m03" exists ...
	I0604 23:36:25.098936    4480 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0604 23:36:25.098936    4480 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000-m03 ).state
	I0604 23:36:27.408856    4480 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:36:27.408856    4480 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:36:27.408971    4480 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000-m03 ).networkadapters[0]).ipaddresses[0]
	I0604 23:36:30.123317    4480 main.go:141] libmachine: [stdout =====>] : 172.20.128.16
	
	I0604 23:36:30.135905    4480 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:36:30.136364    4480 sshutil.go:53] new ssh client: &{IP:172.20.128.16 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-022000-m03\id_rsa Username:docker}
	I0604 23:36:30.237297    4480 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (5.1383201s)
	I0604 23:36:30.252543    4480 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0604 23:36:30.278319    4480 status.go:257] multinode-022000-m03 status: &{Name:multinode-022000-m03 Host:Running Kubelet:Stopped APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
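The stderr block above shows how each node is resolved before it is probed: the hyperv driver shells out to PowerShell to read the VM state (`( Hyper-V\Get-VM <name> ).state`) and the first IP address of the first network adapter, then opens an SSH client against that address. Below is a minimal, illustrative Go sketch of those two PowerShell queries only, assuming a Windows host with the Hyper-V module available and reusing the multinode-022000 VM name from this run as an example; the real driver lives in minikube's libmachine code and handles retries and errors differently.

// Sketch: query a Hyper-V VM's state and first IP via PowerShell,
// mirroring the "[executing ==>]" lines in the log above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func powershell(cmd string) (string, error) {
	out, err := exec.Command(
		`C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
		"-NoProfile", "-NonInteractive", cmd,
	).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	state, err := powershell(`( Hyper-V\Get-VM multinode-022000 ).state`)
	if err != nil {
		panic(err)
	}
	ip, err := powershell(`(( Hyper-V\Get-VM multinode-022000 ).networkadapters[0]).ipaddresses[0]`)
	if err != nil {
		panic(err)
	}
	fmt.Printf("state=%s ip=%s\n", state, ip)
}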
multinode_test.go:290: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-022000 status -v=7 --alsologtostderr
E0604 23:36:45.689071   14064 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-235400\client.crt: The system cannot find the path specified.
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-022000 status -v=7 --alsologtostderr: exit status 2 (38.1929113s)

                                                
                                                
-- stdout --
	multinode-022000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-022000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-022000-m03
	type: Worker
	host: Running
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0604 23:36:31.665130    3900 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0604 23:36:31.752408    3900 out.go:291] Setting OutFile to fd 1376 ...
	I0604 23:36:31.756495    3900 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0604 23:36:31.756495    3900 out.go:304] Setting ErrFile to fd 1380...
	I0604 23:36:31.756495    3900 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0604 23:36:31.771287    3900 out.go:298] Setting JSON to false
	I0604 23:36:31.771287    3900 mustload.go:65] Loading cluster: multinode-022000
	I0604 23:36:31.771287    3900 notify.go:220] Checking for updates...
	I0604 23:36:31.773109    3900 config.go:182] Loaded profile config "multinode-022000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0604 23:36:31.773109    3900 status.go:255] checking status of multinode-022000 ...
	I0604 23:36:31.773852    3900 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000 ).state
	I0604 23:36:34.054685    3900 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:36:34.054685    3900 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:36:34.054685    3900 status.go:330] multinode-022000 host status = "Running" (err=<nil>)
	I0604 23:36:34.054685    3900 host.go:66] Checking if "multinode-022000" exists ...
	I0604 23:36:34.055837    3900 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000 ).state
	I0604 23:36:36.406921    3900 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:36:36.406921    3900 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:36:36.406921    3900 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000 ).networkadapters[0]).ipaddresses[0]
	I0604 23:36:39.127363    3900 main.go:141] libmachine: [stdout =====>] : 172.20.128.97
	
	I0604 23:36:39.127363    3900 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:36:39.139873    3900 host.go:66] Checking if "multinode-022000" exists ...
	I0604 23:36:39.153116    3900 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0604 23:36:39.153116    3900 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000 ).state
	I0604 23:36:41.422780    3900 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:36:41.422780    3900 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:36:41.422780    3900 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000 ).networkadapters[0]).ipaddresses[0]
	I0604 23:36:44.114777    3900 main.go:141] libmachine: [stdout =====>] : 172.20.128.97
	
	I0604 23:36:44.114777    3900 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:36:44.127535    3900 sshutil.go:53] new ssh client: &{IP:172.20.128.97 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-022000\id_rsa Username:docker}
	I0604 23:36:44.227853    3900 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (5.074697s)
	I0604 23:36:44.241185    3900 ssh_runner.go:195] Run: systemctl --version
	I0604 23:36:44.261756    3900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0604 23:36:44.288711    3900 kubeconfig.go:125] found "multinode-022000" server: "https://172.20.128.97:8443"
	I0604 23:36:44.288783    3900 api_server.go:166] Checking apiserver status ...
	I0604 23:36:44.300956    3900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0604 23:36:44.345505    3900 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2011/cgroup
	W0604 23:36:44.367675    3900 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2011/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0604 23:36:44.383752    3900 ssh_runner.go:195] Run: ls
	I0604 23:36:44.392845    3900 api_server.go:253] Checking apiserver healthz at https://172.20.128.97:8443/healthz ...
	I0604 23:36:44.401486    3900 api_server.go:279] https://172.20.128.97:8443/healthz returned 200:
	ok
	I0604 23:36:44.401486    3900 status.go:422] multinode-022000 apiserver status = Running (err=<nil>)
	I0604 23:36:44.401486    3900 status.go:257] multinode-022000 status: &{Name:multinode-022000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0604 23:36:44.401486    3900 status.go:255] checking status of multinode-022000-m02 ...
	I0604 23:36:44.405098    3900 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000-m02 ).state
	I0604 23:36:46.696775    3900 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:36:46.696775    3900 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:36:46.696775    3900 status.go:330] multinode-022000-m02 host status = "Running" (err=<nil>)
	I0604 23:36:46.696775    3900 host.go:66] Checking if "multinode-022000-m02" exists ...
	I0604 23:36:46.703299    3900 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000-m02 ).state
	I0604 23:36:49.003857    3900 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:36:49.010185    3900 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:36:49.010264    3900 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000-m02 ).networkadapters[0]).ipaddresses[0]
	I0604 23:36:51.776556    3900 main.go:141] libmachine: [stdout =====>] : 172.20.130.221
	
	I0604 23:36:51.776628    3900 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:36:51.776655    3900 host.go:66] Checking if "multinode-022000-m02" exists ...
	I0604 23:36:51.791140    3900 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0604 23:36:51.791140    3900 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000-m02 ).state
	I0604 23:36:54.108771    3900 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:36:54.122987    3900 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:36:54.122987    3900 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000-m02 ).networkadapters[0]).ipaddresses[0]
	I0604 23:36:56.880475    3900 main.go:141] libmachine: [stdout =====>] : 172.20.130.221
	
	I0604 23:36:56.880713    3900 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:36:56.880782    3900 sshutil.go:53] new ssh client: &{IP:172.20.130.221 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-022000-m02\id_rsa Username:docker}
	I0604 23:36:56.980501    3900 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (5.1893198s)
	I0604 23:36:56.996468    3900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0604 23:36:57.026273    3900 status.go:257] multinode-022000-m02 status: &{Name:multinode-022000-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0604 23:36:57.026273    3900 status.go:255] checking status of multinode-022000-m03 ...
	I0604 23:36:57.026940    3900 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000-m03 ).state
	I0604 23:36:59.301047    3900 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:36:59.301047    3900 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:36:59.301047    3900 status.go:330] multinode-022000-m03 host status = "Running" (err=<nil>)
	I0604 23:36:59.301047    3900 host.go:66] Checking if "multinode-022000-m03" exists ...
	I0604 23:36:59.302400    3900 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000-m03 ).state
	I0604 23:37:01.693081    3900 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:37:01.706292    3900 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:37:01.706292    3900 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000-m03 ).networkadapters[0]).ipaddresses[0]
	I0604 23:37:04.546909    3900 main.go:141] libmachine: [stdout =====>] : 172.20.128.16
	
	I0604 23:37:04.546980    3900 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:37:04.546980    3900 host.go:66] Checking if "multinode-022000-m03" exists ...
	I0604 23:37:04.559662    3900 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0604 23:37:04.559662    3900 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000-m03 ).state
	I0604 23:37:06.835383    3900 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:37:06.835383    3900 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:37:06.835383    3900 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000-m03 ).networkadapters[0]).ipaddresses[0]
	I0604 23:37:09.565259    3900 main.go:141] libmachine: [stdout =====>] : 172.20.128.16
	
	I0604 23:37:09.577398    3900 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:37:09.577398    3900 sshutil.go:53] new ssh client: &{IP:172.20.128.16 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-022000-m03\id_rsa Username:docker}
	I0604 23:37:09.681672    3900 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (5.1219691s)
	I0604 23:37:09.692836    3900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0604 23:37:09.721187    3900 status.go:257] multinode-022000-m03 status: &{Name:multinode-022000-m03 Host:Running Kubelet:Stopped APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
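As in the first status run, the per-node result is derived from two SSH probes: a disk-usage check (`df -h /var | awk 'NR==2{print $5}'`) and `sudo systemctl is-active --quiet service kubelet`, whose exit code is what becomes `kubelet: Running` or `kubelet: Stopped` for multinode-022000-m03. The following is a hedged approximation of that kubelet probe using the plain ssh CLI with the key path and IP reported above; the status command itself goes through minikube's ssh runner rather than the ssh binary.

// Sketch: run the same kubelet liveness check over ssh and map the
// exit code to Running/Stopped, as the status output above does.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("ssh",
		"-i", `C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-022000-m03\id_rsa`,
		"-o", "StrictHostKeyChecking=no",
		"docker@172.20.128.16",
		"sudo systemctl is-active --quiet service kubelet",
	)
	state := "Running"
	if err := cmd.Run(); err != nil {
		// Non-zero exit means kubelet is not active; in this simplified
		// sketch an SSH connection failure would also land here.
		state = "Stopped"
	}
	fmt.Println("kubelet:", state)
}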
multinode_test.go:294: failed to run minikube status. args "out/minikube-windows-amd64.exe -p multinode-022000 status -v=7 --alsologtostderr" : exit status 2
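Note that the control plane itself was healthy in this run: the log shows the apiserver check falling back from the freezer-cgroup lookup to a direct GET on https://172.20.128.97:8443/healthz, which returned 200/ok, so only the stopped kubelet on m03 drives the non-zero exit. A rough, illustrative version of that healthz probe is sketched below; TLS verification is skipped here only to keep the example self-contained, whereas the real check uses the cluster's certificates.

// Sketch: probe the apiserver healthz endpoint seen in the log above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // illustration only
	}}
	resp, err := client.Get("https://172.20.128.97:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d %s\n", resp.StatusCode, body) // a healthy apiserver answers "200 ok"
}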
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-022000 -n multinode-022000
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-022000 -n multinode-022000: (12.8604029s)
helpers_test.go:244: <<< TestMultiNode/serial/StartAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StartAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-022000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-022000 logs -n 25: (9.1797788s)
helpers_test.go:252: TestMultiNode/serial/StartAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| Command |                                                           Args                                                           |     Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| cp      | multinode-022000 cp multinode-022000:/home/docker/cp-test.txt                                                            | multinode-022000 | minikube6\jenkins | v1.33.1 | 04 Jun 24 23:26 UTC | 04 Jun 24 23:27 UTC |
	|         | multinode-022000-m03:/home/docker/cp-test_multinode-022000_multinode-022000-m03.txt                                      |                  |                   |         |                     |                     |
	| ssh     | multinode-022000 ssh -n                                                                                                  | multinode-022000 | minikube6\jenkins | v1.33.1 | 04 Jun 24 23:27 UTC | 04 Jun 24 23:27 UTC |
	|         | multinode-022000 sudo cat                                                                                                |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-022000 ssh -n multinode-022000-m03 sudo cat                                                                    | multinode-022000 | minikube6\jenkins | v1.33.1 | 04 Jun 24 23:27 UTC | 04 Jun 24 23:27 UTC |
	|         | /home/docker/cp-test_multinode-022000_multinode-022000-m03.txt                                                           |                  |                   |         |                     |                     |
	| cp      | multinode-022000 cp testdata\cp-test.txt                                                                                 | multinode-022000 | minikube6\jenkins | v1.33.1 | 04 Jun 24 23:27 UTC | 04 Jun 24 23:27 UTC |
	|         | multinode-022000-m02:/home/docker/cp-test.txt                                                                            |                  |                   |         |                     |                     |
	| ssh     | multinode-022000 ssh -n                                                                                                  | multinode-022000 | minikube6\jenkins | v1.33.1 | 04 Jun 24 23:27 UTC | 04 Jun 24 23:27 UTC |
	|         | multinode-022000-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-022000 cp multinode-022000-m02:/home/docker/cp-test.txt                                                        | multinode-022000 | minikube6\jenkins | v1.33.1 | 04 Jun 24 23:27 UTC | 04 Jun 24 23:28 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile2973253033\001\cp-test_multinode-022000-m02.txt |                  |                   |         |                     |                     |
	| ssh     | multinode-022000 ssh -n                                                                                                  | multinode-022000 | minikube6\jenkins | v1.33.1 | 04 Jun 24 23:28 UTC | 04 Jun 24 23:28 UTC |
	|         | multinode-022000-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-022000 cp multinode-022000-m02:/home/docker/cp-test.txt                                                        | multinode-022000 | minikube6\jenkins | v1.33.1 | 04 Jun 24 23:28 UTC | 04 Jun 24 23:28 UTC |
	|         | multinode-022000:/home/docker/cp-test_multinode-022000-m02_multinode-022000.txt                                          |                  |                   |         |                     |                     |
	| ssh     | multinode-022000 ssh -n                                                                                                  | multinode-022000 | minikube6\jenkins | v1.33.1 | 04 Jun 24 23:28 UTC | 04 Jun 24 23:28 UTC |
	|         | multinode-022000-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-022000 ssh -n multinode-022000 sudo cat                                                                        | multinode-022000 | minikube6\jenkins | v1.33.1 | 04 Jun 24 23:28 UTC | 04 Jun 24 23:28 UTC |
	|         | /home/docker/cp-test_multinode-022000-m02_multinode-022000.txt                                                           |                  |                   |         |                     |                     |
	| cp      | multinode-022000 cp multinode-022000-m02:/home/docker/cp-test.txt                                                        | multinode-022000 | minikube6\jenkins | v1.33.1 | 04 Jun 24 23:28 UTC | 04 Jun 24 23:29 UTC |
	|         | multinode-022000-m03:/home/docker/cp-test_multinode-022000-m02_multinode-022000-m03.txt                                  |                  |                   |         |                     |                     |
	| ssh     | multinode-022000 ssh -n                                                                                                  | multinode-022000 | minikube6\jenkins | v1.33.1 | 04 Jun 24 23:29 UTC | 04 Jun 24 23:29 UTC |
	|         | multinode-022000-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-022000 ssh -n multinode-022000-m03 sudo cat                                                                    | multinode-022000 | minikube6\jenkins | v1.33.1 | 04 Jun 24 23:29 UTC | 04 Jun 24 23:29 UTC |
	|         | /home/docker/cp-test_multinode-022000-m02_multinode-022000-m03.txt                                                       |                  |                   |         |                     |                     |
	| cp      | multinode-022000 cp testdata\cp-test.txt                                                                                 | multinode-022000 | minikube6\jenkins | v1.33.1 | 04 Jun 24 23:29 UTC | 04 Jun 24 23:29 UTC |
	|         | multinode-022000-m03:/home/docker/cp-test.txt                                                                            |                  |                   |         |                     |                     |
	| ssh     | multinode-022000 ssh -n                                                                                                  | multinode-022000 | minikube6\jenkins | v1.33.1 | 04 Jun 24 23:29 UTC | 04 Jun 24 23:29 UTC |
	|         | multinode-022000-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-022000 cp multinode-022000-m03:/home/docker/cp-test.txt                                                        | multinode-022000 | minikube6\jenkins | v1.33.1 | 04 Jun 24 23:29 UTC | 04 Jun 24 23:30 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile2973253033\001\cp-test_multinode-022000-m03.txt |                  |                   |         |                     |                     |
	| ssh     | multinode-022000 ssh -n                                                                                                  | multinode-022000 | minikube6\jenkins | v1.33.1 | 04 Jun 24 23:30 UTC | 04 Jun 24 23:30 UTC |
	|         | multinode-022000-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-022000 cp multinode-022000-m03:/home/docker/cp-test.txt                                                        | multinode-022000 | minikube6\jenkins | v1.33.1 | 04 Jun 24 23:30 UTC | 04 Jun 24 23:30 UTC |
	|         | multinode-022000:/home/docker/cp-test_multinode-022000-m03_multinode-022000.txt                                          |                  |                   |         |                     |                     |
	| ssh     | multinode-022000 ssh -n                                                                                                  | multinode-022000 | minikube6\jenkins | v1.33.1 | 04 Jun 24 23:30 UTC | 04 Jun 24 23:30 UTC |
	|         | multinode-022000-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-022000 ssh -n multinode-022000 sudo cat                                                                        | multinode-022000 | minikube6\jenkins | v1.33.1 | 04 Jun 24 23:30 UTC | 04 Jun 24 23:30 UTC |
	|         | /home/docker/cp-test_multinode-022000-m03_multinode-022000.txt                                                           |                  |                   |         |                     |                     |
	| cp      | multinode-022000 cp multinode-022000-m03:/home/docker/cp-test.txt                                                        | multinode-022000 | minikube6\jenkins | v1.33.1 | 04 Jun 24 23:30 UTC | 04 Jun 24 23:31 UTC |
	|         | multinode-022000-m02:/home/docker/cp-test_multinode-022000-m03_multinode-022000-m02.txt                                  |                  |                   |         |                     |                     |
	| ssh     | multinode-022000 ssh -n                                                                                                  | multinode-022000 | minikube6\jenkins | v1.33.1 | 04 Jun 24 23:31 UTC | 04 Jun 24 23:31 UTC |
	|         | multinode-022000-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-022000 ssh -n multinode-022000-m02 sudo cat                                                                    | multinode-022000 | minikube6\jenkins | v1.33.1 | 04 Jun 24 23:31 UTC | 04 Jun 24 23:31 UTC |
	|         | /home/docker/cp-test_multinode-022000-m03_multinode-022000-m02.txt                                                       |                  |                   |         |                     |                     |
	| node    | multinode-022000 node stop m03                                                                                           | multinode-022000 | minikube6\jenkins | v1.33.1 | 04 Jun 24 23:31 UTC | 04 Jun 24 23:31 UTC |
	| node    | multinode-022000 node start                                                                                              | multinode-022000 | minikube6\jenkins | v1.33.1 | 04 Jun 24 23:32 UTC |                     |
	|         | m03 -v=7 --alsologtostderr                                                                                               |                  |                   |         |                     |                     |
	|---------|--------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/04 23:11:50
	Running on machine: minikube6
	Binary: Built with gc go1.22.3 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0604 23:11:50.938566    6196 out.go:291] Setting OutFile to fd 1188 ...
	I0604 23:11:50.940378    6196 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0604 23:11:50.940378    6196 out.go:304] Setting ErrFile to fd 884...
	I0604 23:11:50.940378    6196 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0604 23:11:50.971352    6196 out.go:298] Setting JSON to false
	I0604 23:11:50.971903    6196 start.go:129] hostinfo: {"hostname":"minikube6","uptime":89960,"bootTime":1717452750,"procs":190,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4474 Build 19045.4474","kernelVersion":"10.0.19045.4474 Build 19045.4474","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0604 23:11:50.971903    6196 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0604 23:11:50.982016    6196 out.go:177] * [multinode-022000] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4474 Build 19045.4474
	I0604 23:11:50.986541    6196 notify.go:220] Checking for updates...
	I0604 23:11:50.988842    6196 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0604 23:11:50.991641    6196 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0604 23:11:50.995727    6196 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0604 23:11:50.998398    6196 out.go:177]   - MINIKUBE_LOCATION=19024
	I0604 23:11:51.001798    6196 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0604 23:11:51.007108    6196 config.go:182] Loaded profile config "ha-609500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0604 23:11:51.007668    6196 driver.go:392] Setting default libvirt URI to qemu:///system
	I0604 23:11:56.752972    6196 out.go:177] * Using the hyperv driver based on user configuration
	I0604 23:11:56.756869    6196 start.go:297] selected driver: hyperv
	I0604 23:11:56.756869    6196 start.go:901] validating driver "hyperv" against <nil>
	I0604 23:11:56.759166    6196 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0604 23:11:56.813162    6196 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0604 23:11:56.814562    6196 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0604 23:11:56.814624    6196 cni.go:84] Creating CNI manager for ""
	I0604 23:11:56.814624    6196 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0604 23:11:56.814624    6196 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0604 23:11:56.814624    6196 start.go:340] cluster config:
	{Name:multinode-022000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717518322-19024@sha256:d2210ba725128d67c6173c8b8d82d6c8736e8dad7a6c389a278f795205c6764f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-022000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: Stat
icIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0604 23:11:56.814624    6196 iso.go:125] acquiring lock: {Name:mkd51e140550ee3ad29317eefa47594b071594dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0604 23:11:56.819653    6196 out.go:177] * Starting "multinode-022000" primary control-plane node in "multinode-022000" cluster
	I0604 23:11:56.821799    6196 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0604 23:11:56.821799    6196 preload.go:147] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
	I0604 23:11:56.821799    6196 cache.go:56] Caching tarball of preloaded images
	I0604 23:11:56.821799    6196 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0604 23:11:56.821799    6196 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0604 23:11:56.823424    6196 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-022000\config.json ...
	I0604 23:11:56.823619    6196 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-022000\config.json: {Name:mk5c99c0f75f9c570ef890f215c48836e63daea1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0604 23:11:56.823923    6196 start.go:360] acquireMachinesLock for multinode-022000: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0604 23:11:56.825103    6196 start.go:364] duration metric: took 113.3µs to acquireMachinesLock for "multinode-022000"
	I0604 23:11:56.825311    6196 start.go:93] Provisioning new machine with config: &{Name:multinode-022000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19024/minikube-v1.33.1-1717518792-19024-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717518322-19024@sha256:d2210ba725128d67c6173c8b8d82d6c8736e8dad7a6c389a278f795205c6764f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.30.1 ClusterName:multinode-022000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mount
Options:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0604 23:11:56.825311    6196 start.go:125] createHost starting for "" (driver="hyperv")
	I0604 23:11:56.828092    6196 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0604 23:11:56.829789    6196 start.go:159] libmachine.API.Create for "multinode-022000" (driver="hyperv")
	I0604 23:11:56.829789    6196 client.go:168] LocalClient.Create starting
	I0604 23:11:56.830129    6196 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0604 23:11:56.830129    6196 main.go:141] libmachine: Decoding PEM data...
	I0604 23:11:56.830129    6196 main.go:141] libmachine: Parsing certificate...
	I0604 23:11:56.830905    6196 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0604 23:11:56.831179    6196 main.go:141] libmachine: Decoding PEM data...
	I0604 23:11:56.831179    6196 main.go:141] libmachine: Parsing certificate...
	I0604 23:11:56.831179    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0604 23:11:58.995896    6196 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0604 23:11:58.995896    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:11:59.005843    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0604 23:12:00.929391    6196 main.go:141] libmachine: [stdout =====>] : False
	
	I0604 23:12:00.938397    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:12:00.938397    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0604 23:12:02.556708    6196 main.go:141] libmachine: [stdout =====>] : True
	
	I0604 23:12:02.556708    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:12:02.565436    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0604 23:12:06.485499    6196 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0604 23:12:06.485499    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:12:06.488564    6196 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1717518792-19024-amd64.iso...
	I0604 23:12:07.034231    6196 main.go:141] libmachine: Creating SSH key...
	I0604 23:12:07.365189    6196 main.go:141] libmachine: Creating VM...
	I0604 23:12:07.365189    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0604 23:12:10.523521    6196 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0604 23:12:10.523521    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:12:10.523521    6196 main.go:141] libmachine: Using switch "Default Switch"
	I0604 23:12:10.523893    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0604 23:12:12.375736    6196 main.go:141] libmachine: [stdout =====>] : True
	
	I0604 23:12:12.384112    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:12:12.384112    6196 main.go:141] libmachine: Creating VHD
	I0604 23:12:12.384112    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-022000\fixed.vhd' -SizeBytes 10MB -Fixed
	I0604 23:12:16.381182    6196 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-022000\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : BCA69AF8-6241-4CFF-9F74-B5CC0E3602EB
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0604 23:12:16.381182    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:12:16.381182    6196 main.go:141] libmachine: Writing magic tar header
	I0604 23:12:16.381182    6196 main.go:141] libmachine: Writing SSH key tar header
	I0604 23:12:16.390861    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-022000\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-022000\disk.vhd' -VHDType Dynamic -DeleteSource
	I0604 23:12:19.685450    6196 main.go:141] libmachine: [stdout =====>] : 
	I0604 23:12:19.697481    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:12:19.697732    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-022000\disk.vhd' -SizeBytes 20000MB
	I0604 23:12:22.393203    6196 main.go:141] libmachine: [stdout =====>] : 
	I0604 23:12:22.393203    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:12:22.407069    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-022000 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-022000' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0604 23:12:26.455688    6196 main.go:141] libmachine: [stdout =====>] : 
	Name             State CPUUsage(%!)(MISSING) MemoryAssigned(M) Uptime   Status             Version
	----             ----- ----------- ----------------- ------   ------             -------
	multinode-022000 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0604 23:12:26.455783    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:12:26.455783    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-022000 -DynamicMemoryEnabled $false
	I0604 23:12:28.939102    6196 main.go:141] libmachine: [stdout =====>] : 
	I0604 23:12:28.939102    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:12:28.939102    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-022000 -Count 2
	I0604 23:12:31.327868    6196 main.go:141] libmachine: [stdout =====>] : 
	I0604 23:12:31.341976    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:12:31.342233    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-022000 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-022000\boot2docker.iso'
	I0604 23:12:34.140105    6196 main.go:141] libmachine: [stdout =====>] : 
	I0604 23:12:34.140105    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:12:34.143563    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-022000 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-022000\disk.vhd'
	I0604 23:12:36.988077    6196 main.go:141] libmachine: [stdout =====>] : 
	I0604 23:12:36.988077    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:12:36.988077    6196 main.go:141] libmachine: Starting VM...
	I0604 23:12:36.988077    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-022000
	I0604 23:12:40.284458    6196 main.go:141] libmachine: [stdout =====>] : 
	I0604 23:12:40.297074    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:12:40.297074    6196 main.go:141] libmachine: Waiting for host to start...
	I0604 23:12:40.297074    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000 ).state
	I0604 23:12:42.757456    6196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:12:42.757456    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:12:42.757456    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000 ).networkadapters[0]).ipaddresses[0]
	I0604 23:12:45.495421    6196 main.go:141] libmachine: [stdout =====>] : 
	I0604 23:12:45.495421    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:12:46.508549    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000 ).state
	I0604 23:12:48.901871    6196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:12:48.902033    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:12:48.902033    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000 ).networkadapters[0]).ipaddresses[0]
	I0604 23:12:51.611701    6196 main.go:141] libmachine: [stdout =====>] : 
	I0604 23:12:51.611781    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:12:52.620226    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000 ).state
	I0604 23:12:55.036633    6196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:12:55.036633    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:12:55.036810    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000 ).networkadapters[0]).ipaddresses[0]
	I0604 23:12:57.746024    6196 main.go:141] libmachine: [stdout =====>] : 
	I0604 23:12:57.752732    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:12:58.755831    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000 ).state
	I0604 23:13:01.192377    6196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:13:01.199150    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:13:01.199274    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000 ).networkadapters[0]).ipaddresses[0]
	I0604 23:13:03.976813    6196 main.go:141] libmachine: [stdout =====>] : 
	I0604 23:13:03.976813    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:13:04.993010    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000 ).state
	I0604 23:13:07.426655    6196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:13:07.426655    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:13:07.426863    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000 ).networkadapters[0]).ipaddresses[0]
	I0604 23:13:10.198860    6196 main.go:141] libmachine: [stdout =====>] : 172.20.128.97
	
	I0604 23:13:10.211321    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:13:10.211321    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000 ).state
	I0604 23:13:12.506051    6196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:13:12.506112    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:13:12.506112    6196 machine.go:94] provisionDockerMachine start ...
	I0604 23:13:12.506112    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000 ).state
	I0604 23:13:14.888039    6196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:13:14.900262    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:13:14.900262    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000 ).networkadapters[0]).ipaddresses[0]
	I0604 23:13:17.688030    6196 main.go:141] libmachine: [stdout =====>] : 172.20.128.97
	
	I0604 23:13:17.700752    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:13:17.706504    6196 main.go:141] libmachine: Using SSH client type: native
	I0604 23:13:17.715039    6196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf6a540] 0xf6d120 <nil>  [] 0s} 172.20.128.97 22 <nil> <nil>}
	I0604 23:13:17.715039    6196 main.go:141] libmachine: About to run SSH command:
	hostname
	I0604 23:13:17.845663    6196 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0604 23:13:17.845786    6196 buildroot.go:166] provisioning hostname "multinode-022000"
	I0604 23:13:17.845786    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000 ).state
	I0604 23:13:20.170099    6196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:13:20.170099    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:13:20.183414    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000 ).networkadapters[0]).ipaddresses[0]
	I0604 23:13:22.958992    6196 main.go:141] libmachine: [stdout =====>] : 172.20.128.97
	
	I0604 23:13:22.958992    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:13:22.977984    6196 main.go:141] libmachine: Using SSH client type: native
	I0604 23:13:22.978691    6196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf6a540] 0xf6d120 <nil>  [] 0s} 172.20.128.97 22 <nil> <nil>}
	I0604 23:13:22.978691    6196 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-022000 && echo "multinode-022000" | sudo tee /etc/hostname
	I0604 23:13:23.143053    6196 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-022000
	
	I0604 23:13:23.143213    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000 ).state
	I0604 23:13:25.463698    6196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:13:25.463698    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:13:25.463794    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000 ).networkadapters[0]).ipaddresses[0]
	I0604 23:13:28.206323    6196 main.go:141] libmachine: [stdout =====>] : 172.20.128.97
	
	I0604 23:13:28.206323    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:13:28.226179    6196 main.go:141] libmachine: Using SSH client type: native
	I0604 23:13:28.226325    6196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf6a540] 0xf6d120 <nil>  [] 0s} 172.20.128.97 22 <nil> <nil>}
	I0604 23:13:28.226325    6196 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-022000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-022000/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-022000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0604 23:13:28.375824    6196 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0604 23:13:28.375824    6196 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0604 23:13:28.375824    6196 buildroot.go:174] setting up certificates
	I0604 23:13:28.375824    6196 provision.go:84] configureAuth start
	I0604 23:13:28.375824    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000 ).state
	I0604 23:13:30.675619    6196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:13:30.675619    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:13:30.688391    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000 ).networkadapters[0]).ipaddresses[0]
	I0604 23:13:33.451482    6196 main.go:141] libmachine: [stdout =====>] : 172.20.128.97
	
	I0604 23:13:33.451482    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:13:33.451482    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000 ).state
	I0604 23:13:35.756530    6196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:13:35.756530    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:13:35.756803    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000 ).networkadapters[0]).ipaddresses[0]
	I0604 23:13:38.552123    6196 main.go:141] libmachine: [stdout =====>] : 172.20.128.97
	
	I0604 23:13:38.552123    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:13:38.552123    6196 provision.go:143] copyHostCerts
	I0604 23:13:38.564937    6196 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0604 23:13:38.565210    6196 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0604 23:13:38.565296    6196 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0604 23:13:38.565741    6196 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0604 23:13:38.567371    6196 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0604 23:13:38.567670    6196 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0604 23:13:38.567670    6196 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0604 23:13:38.567670    6196 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0604 23:13:38.569073    6196 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0604 23:13:38.569251    6196 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0604 23:13:38.569251    6196 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0604 23:13:38.569791    6196 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0604 23:13:38.570720    6196 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-022000 san=[127.0.0.1 172.20.128.97 localhost minikube multinode-022000]
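Note: the configureAuth step generates a server certificate signed by the minikube CA with exactly the SANs listed above (two IPs plus three hostnames). A minimal crypto/x509 sketch of that flow; it generates a throwaway CA instead of loading ca.pem/ca-key.pem from disk, and error handling is elided for brevity:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

// Illustrative only; minikube's provisioner has its own certificate helpers
// and loads the CA key pair from the .minikube certs directory.
func main() {
	// Hypothetical CA key pair (stand-in for ca.pem / ca-key.pem).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTpl, caTpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the SANs from the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-022000"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.20.128.97")},
		DNSNames:     []string{"localhost", "minikube", "multinode-022000"},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTpl, caCert, &srvKey.PublicKey, caKey)

	// Write server.pem; server-key.pem would be written analogously.
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}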
	I0604 23:13:38.743476    6196 provision.go:177] copyRemoteCerts
	I0604 23:13:38.762175    6196 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0604 23:13:38.762175    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000 ).state
	I0604 23:13:41.045880    6196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:13:41.045880    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:13:41.058397    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000 ).networkadapters[0]).ipaddresses[0]
	I0604 23:13:43.813292    6196 main.go:141] libmachine: [stdout =====>] : 172.20.128.97
	
	I0604 23:13:43.813410    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:13:43.813903    6196 sshutil.go:53] new ssh client: &{IP:172.20.128.97 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-022000\id_rsa Username:docker}
	I0604 23:13:43.922365    6196 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.1601493s)
	I0604 23:13:43.922365    6196 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0604 23:13:43.922982    6196 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0604 23:13:43.973646    6196 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0604 23:13:43.973779    6196 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0604 23:13:44.027509    6196 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0604 23:13:44.028174    6196 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0604 23:13:44.080145    6196 provision.go:87] duration metric: took 15.7041342s to configureAuth
	I0604 23:13:44.080215    6196 buildroot.go:189] setting minikube options for container-runtime
	I0604 23:13:44.080837    6196 config.go:182] Loaded profile config "multinode-022000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0604 23:13:44.080837    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000 ).state
	I0604 23:13:46.363849    6196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:13:46.364079    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:13:46.364150    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000 ).networkadapters[0]).ipaddresses[0]
	I0604 23:13:49.068168    6196 main.go:141] libmachine: [stdout =====>] : 172.20.128.97
	
	I0604 23:13:49.079773    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:13:49.085589    6196 main.go:141] libmachine: Using SSH client type: native
	I0604 23:13:49.086156    6196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf6a540] 0xf6d120 <nil>  [] 0s} 172.20.128.97 22 <nil> <nil>}
	I0604 23:13:49.086230    6196 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0604 23:13:49.220934    6196 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0604 23:13:49.221023    6196 buildroot.go:70] root file system type: tmpfs
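Note: the provisioner probes the guest's root filesystem type with `df --output=fstype / | tail -n 1`; the Buildroot guest reports tmpfs. A small sketch running the same probe via os/exec (illustrative; executed locally here rather than over SSH):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// Run the same probe as the log above and print the last line of df output.
func main() {
	out, err := exec.Command("sh", "-c", "df --output=fstype / | tail -n 1").Output()
	if err != nil {
		panic(err)
	}
	fmt.Println("root filesystem type:", strings.TrimSpace(string(out)))
}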
	I0604 23:13:49.221180    6196 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0604 23:13:49.221340    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000 ).state
	I0604 23:13:51.528476    6196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:13:51.529126    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:13:51.529254    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000 ).networkadapters[0]).ipaddresses[0]
	I0604 23:13:54.297161    6196 main.go:141] libmachine: [stdout =====>] : 172.20.128.97
	
	I0604 23:13:54.308433    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:13:54.315013    6196 main.go:141] libmachine: Using SSH client type: native
	I0604 23:13:54.315628    6196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf6a540] 0xf6d120 <nil>  [] 0s} 172.20.128.97 22 <nil> <nil>}
	I0604 23:13:54.315628    6196 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0604 23:13:54.478292    6196 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0604 23:13:54.478292    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000 ).state
	I0604 23:13:56.783950    6196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:13:56.789307    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:13:56.789307    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000 ).networkadapters[0]).ipaddresses[0]
	I0604 23:13:59.494952    6196 main.go:141] libmachine: [stdout =====>] : 172.20.128.97
	
	I0604 23:13:59.495012    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:13:59.500436    6196 main.go:141] libmachine: Using SSH client type: native
	I0604 23:13:59.500977    6196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf6a540] 0xf6d120 <nil>  [] 0s} 172.20.128.97 22 <nil> <nil>}
	I0604 23:13:59.501165    6196 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0604 23:14:01.650321    6196 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
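Note: the one-liner above only swaps docker.service.new into place (and reloads/enables/restarts Docker) when the rendered unit differs from what is already on disk, so repeated provisioning runs are idempotent. A minimal Go sketch of the same pattern; minikube sends this over SSH, whereas here it is run locally and sudo/systemd are assumed to be available:

package main

import (
	"fmt"
	"os/exec"
)

// updateDockerUnit applies the "diff || { mv; daemon-reload; enable; restart }"
// pattern from the log. Illustrative only.
func updateDockerUnit() error {
	cmd := `sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || ` +
		`{ sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; ` +
		`sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }`
	out, err := exec.Command("sh", "-c", cmd).CombinedOutput()
	fmt.Print(string(out))
	return err
}

func main() {
	if err := updateDockerUnit(); err != nil {
		fmt.Println("update failed:", err)
	}
}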
	I0604 23:14:01.650321    6196 machine.go:97] duration metric: took 49.1438276s to provisionDockerMachine
	I0604 23:14:01.650321    6196 client.go:171] duration metric: took 2m4.819558s to LocalClient.Create
	I0604 23:14:01.650321    6196 start.go:167] duration metric: took 2m4.819558s to libmachine.API.Create "multinode-022000"
	I0604 23:14:01.650321    6196 start.go:293] postStartSetup for "multinode-022000" (driver="hyperv")
	I0604 23:14:01.650321    6196 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0604 23:14:01.666286    6196 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0604 23:14:01.666286    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000 ).state
	I0604 23:14:03.937693    6196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:14:03.940635    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:14:03.940635    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000 ).networkadapters[0]).ipaddresses[0]
	I0604 23:14:06.685204    6196 main.go:141] libmachine: [stdout =====>] : 172.20.128.97
	
	I0604 23:14:06.685204    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:14:06.685501    6196 sshutil.go:53] new ssh client: &{IP:172.20.128.97 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-022000\id_rsa Username:docker}
	I0604 23:14:06.803466    6196 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.1371404s)
	I0604 23:14:06.817959    6196 ssh_runner.go:195] Run: cat /etc/os-release
	I0604 23:14:06.828555    6196 command_runner.go:130] > NAME=Buildroot
	I0604 23:14:06.828555    6196 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0604 23:14:06.828555    6196 command_runner.go:130] > ID=buildroot
	I0604 23:14:06.828555    6196 command_runner.go:130] > VERSION_ID=2023.02.9
	I0604 23:14:06.828555    6196 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0604 23:14:06.828555    6196 info.go:137] Remote host: Buildroot 2023.02.9
	I0604 23:14:06.828555    6196 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0604 23:14:06.829193    6196 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0604 23:14:06.830383    6196 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\140642.pem -> 140642.pem in /etc/ssl/certs
	I0604 23:14:06.830383    6196 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\140642.pem -> /etc/ssl/certs/140642.pem
	I0604 23:14:06.842483    6196 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0604 23:14:06.863834    6196 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\140642.pem --> /etc/ssl/certs/140642.pem (1708 bytes)
	I0604 23:14:06.909377    6196 start.go:296] duration metric: took 5.2590154s for postStartSetup
	I0604 23:14:06.911704    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000 ).state
	I0604 23:14:09.239572    6196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:14:09.239572    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:14:09.252777    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000 ).networkadapters[0]).ipaddresses[0]
	I0604 23:14:12.004165    6196 main.go:141] libmachine: [stdout =====>] : 172.20.128.97
	
	I0604 23:14:12.004165    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:14:12.004340    6196 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-022000\config.json ...
	I0604 23:14:12.007388    6196 start.go:128] duration metric: took 2m15.1810232s to createHost
	I0604 23:14:12.007388    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000 ).state
	I0604 23:14:14.340015    6196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:14:14.340015    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:14:14.340015    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000 ).networkadapters[0]).ipaddresses[0]
	I0604 23:14:17.085744    6196 main.go:141] libmachine: [stdout =====>] : 172.20.128.97
	
	I0604 23:14:17.085830    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:14:17.092956    6196 main.go:141] libmachine: Using SSH client type: native
	I0604 23:14:17.092956    6196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf6a540] 0xf6d120 <nil>  [] 0s} 172.20.128.97 22 <nil> <nil>}
	I0604 23:14:17.093535    6196 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0604 23:14:17.223803    6196 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717542857.223378729
	
	I0604 23:14:17.223951    6196 fix.go:216] guest clock: 1717542857.223378729
	I0604 23:14:17.223951    6196 fix.go:229] Guest: 2024-06-04 23:14:17.223378729 +0000 UTC Remote: 2024-06-04 23:14:12.0073882 +0000 UTC m=+141.244022401 (delta=5.215990529s)
	I0604 23:14:17.224064    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000 ).state
	I0604 23:14:19.483513    6196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:14:19.483513    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:14:19.498605    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000 ).networkadapters[0]).ipaddresses[0]
	I0604 23:14:22.211109    6196 main.go:141] libmachine: [stdout =====>] : 172.20.128.97
	
	I0604 23:14:22.216211    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:14:22.225606    6196 main.go:141] libmachine: Using SSH client type: native
	I0604 23:14:22.226847    6196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf6a540] 0xf6d120 <nil>  [] 0s} 172.20.128.97 22 <nil> <nil>}
	I0604 23:14:22.226847    6196 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1717542857
	I0604 23:14:22.366649    6196 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Jun  4 23:14:17 UTC 2024
	
	I0604 23:14:22.366722    6196 fix.go:236] clock set: Tue Jun  4 23:14:17 UTC 2024
	 (err=<nil>)
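Note: the clock-fix step above compares the guest clock (reported via `date +%s.%N`) with the host's view and, when the skew is large enough (here a ~5.2s delta), resets the guest with `sudo date -s @<unix-seconds>`. A sketch of that check; the parsing and the 2-second threshold are assumptions, not minikube's exact policy:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// needsClockFix parses the guest's "seconds.nanoseconds" output, compares it
// to the local clock and reports whether the skew exceeds maxSkew, along with
// the unix timestamp that would be pushed to the guest.
func needsClockFix(guestOutput string, now time.Time, maxSkew time.Duration) (bool, int64) {
	fields := strings.SplitN(strings.TrimSpace(guestOutput), ".", 2)
	secs, _ := strconv.ParseInt(fields[0], 10, 64)
	delta := now.Sub(time.Unix(secs, 0))
	if delta < 0 {
		delta = -delta
	}
	return delta > maxSkew, now.Unix()
}

func main() {
	fix, target := needsClockFix("1717542857.223378729", time.Now(), 2*time.Second)
	if fix {
		fmt.Printf("would run: sudo date -s @%d\n", target)
	} else {
		fmt.Println("guest clock within tolerance")
	}
}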
	I0604 23:14:22.366722    6196 start.go:83] releasing machines lock for "multinode-022000", held for 2m25.5404854s
	I0604 23:14:22.366722    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000 ).state
	I0604 23:14:24.680421    6196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:14:24.680495    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:14:24.680495    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000 ).networkadapters[0]).ipaddresses[0]
	I0604 23:14:27.427914    6196 main.go:141] libmachine: [stdout =====>] : 172.20.128.97
	
	I0604 23:14:27.427914    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:14:27.431897    6196 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0604 23:14:27.432434    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000 ).state
	I0604 23:14:27.441472    6196 ssh_runner.go:195] Run: cat /version.json
	I0604 23:14:27.441472    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000 ).state
	I0604 23:14:29.791546    6196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:14:29.791546    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:14:29.791840    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000 ).networkadapters[0]).ipaddresses[0]
	I0604 23:14:29.819483    6196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:14:29.819606    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:14:29.819683    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000 ).networkadapters[0]).ipaddresses[0]
	I0604 23:14:32.605660    6196 main.go:141] libmachine: [stdout =====>] : 172.20.128.97
	
	I0604 23:14:32.605660    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:14:32.619034    6196 sshutil.go:53] new ssh client: &{IP:172.20.128.97 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-022000\id_rsa Username:docker}
	I0604 23:14:32.655696    6196 main.go:141] libmachine: [stdout =====>] : 172.20.128.97
	
	I0604 23:14:32.655696    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:14:32.656464    6196 sshutil.go:53] new ssh client: &{IP:172.20.128.97 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-022000\id_rsa Username:docker}
	I0604 23:14:32.817403    6196 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0604 23:14:32.817403    6196 command_runner.go:130] > {"iso_version": "v1.33.1-1717518792-19024", "kicbase_version": "v0.0.44-1717064182-18993", "minikube_version": "v1.33.1", "commit": "8ad41152cc14078867a3ba7f5e3c263f5bd90a46"}
	I0604 23:14:32.817403    6196 ssh_runner.go:235] Completed: cat /version.json: (5.3758894s)
	I0604 23:14:32.817403    6196 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.3854644s)
	I0604 23:14:32.832048    6196 ssh_runner.go:195] Run: systemctl --version
	I0604 23:14:32.842197    6196 command_runner.go:130] > systemd 252 (252)
	I0604 23:14:32.842498    6196 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0604 23:14:32.856792    6196 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0604 23:14:32.865996    6196 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0604 23:14:32.866374    6196 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0604 23:14:32.879524    6196 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0604 23:14:32.902950    6196 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0604 23:14:32.902950    6196 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0604 23:14:32.902950    6196 start.go:494] detecting cgroup driver to use...
	I0604 23:14:32.902950    6196 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0604 23:14:32.946397    6196 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0604 23:14:32.961318    6196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0604 23:14:33.000133    6196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0604 23:14:33.020453    6196 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0604 23:14:33.032247    6196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0604 23:14:33.066306    6196 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0604 23:14:33.107770    6196 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0604 23:14:33.142667    6196 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0604 23:14:33.180576    6196 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0604 23:14:33.216304    6196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0604 23:14:33.250091    6196 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0604 23:14:33.290076    6196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0604 23:14:33.324248    6196 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0604 23:14:33.345059    6196 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0604 23:14:33.356958    6196 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0604 23:14:33.394050    6196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0604 23:14:33.604494    6196 ssh_runner.go:195] Run: sudo systemctl restart containerd
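Note: the sed chain above rewrites /etc/containerd/config.toml so containerd uses the cgroupfs driver (SystemdCgroup = false), the runc v2 runtime, and the standard CNI conf dir, then reloads and restarts containerd. A sketch of the central substitution done in-memory with Go's regexp package instead of sed (illustrative only):

package main

import (
	"fmt"
	"regexp"
)

// forceCgroupfs rewrites any "SystemdCgroup = ..." line to false, preserving
// the original indentation, mirroring the sed command in the log.
func forceCgroupfs(configTOML string) string {
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	return re.ReplaceAllString(configTOML, "${1}SystemdCgroup = false")
}

func main() {
	in := "  [plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runc.options]\n    SystemdCgroup = true\n"
	fmt.Print(forceCgroupfs(in))
}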
	I0604 23:14:33.645054    6196 start.go:494] detecting cgroup driver to use...
	I0604 23:14:33.660200    6196 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0604 23:14:33.689981    6196 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0604 23:14:33.689981    6196 command_runner.go:130] > [Unit]
	I0604 23:14:33.689981    6196 command_runner.go:130] > Description=Docker Application Container Engine
	I0604 23:14:33.690092    6196 command_runner.go:130] > Documentation=https://docs.docker.com
	I0604 23:14:33.690092    6196 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0604 23:14:33.690092    6196 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0604 23:14:33.690129    6196 command_runner.go:130] > StartLimitBurst=3
	I0604 23:14:33.690129    6196 command_runner.go:130] > StartLimitIntervalSec=60
	I0604 23:14:33.690129    6196 command_runner.go:130] > [Service]
	I0604 23:14:33.690129    6196 command_runner.go:130] > Type=notify
	I0604 23:14:33.690129    6196 command_runner.go:130] > Restart=on-failure
	I0604 23:14:33.690129    6196 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0604 23:14:33.690129    6196 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0604 23:14:33.690129    6196 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0604 23:14:33.690129    6196 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0604 23:14:33.690129    6196 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0604 23:14:33.690129    6196 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0604 23:14:33.690251    6196 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0604 23:14:33.690251    6196 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0604 23:14:33.690251    6196 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0604 23:14:33.690251    6196 command_runner.go:130] > ExecStart=
	I0604 23:14:33.690304    6196 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0604 23:14:33.690349    6196 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0604 23:14:33.690349    6196 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0604 23:14:33.690388    6196 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0604 23:14:33.690388    6196 command_runner.go:130] > LimitNOFILE=infinity
	I0604 23:14:33.690421    6196 command_runner.go:130] > LimitNPROC=infinity
	I0604 23:14:33.690421    6196 command_runner.go:130] > LimitCORE=infinity
	I0604 23:14:33.690421    6196 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0604 23:14:33.690421    6196 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0604 23:14:33.690421    6196 command_runner.go:130] > TasksMax=infinity
	I0604 23:14:33.690421    6196 command_runner.go:130] > TimeoutStartSec=0
	I0604 23:14:33.690421    6196 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0604 23:14:33.690421    6196 command_runner.go:130] > Delegate=yes
	I0604 23:14:33.690421    6196 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0604 23:14:33.690421    6196 command_runner.go:130] > KillMode=process
	I0604 23:14:33.690421    6196 command_runner.go:130] > [Install]
	I0604 23:14:33.690421    6196 command_runner.go:130] > WantedBy=multi-user.target
	I0604 23:14:33.705097    6196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0604 23:14:33.740650    6196 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0604 23:14:33.796976    6196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0604 23:14:33.839772    6196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0604 23:14:33.875668    6196 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0604 23:14:33.949072    6196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0604 23:14:33.982025    6196 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0604 23:14:34.021163    6196 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0604 23:14:34.037053    6196 ssh_runner.go:195] Run: which cri-dockerd
	I0604 23:14:34.043322    6196 command_runner.go:130] > /usr/bin/cri-dockerd
	I0604 23:14:34.060157    6196 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0604 23:14:34.087256    6196 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0604 23:14:34.133391    6196 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0604 23:14:34.352524    6196 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0604 23:14:34.576338    6196 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0604 23:14:34.576716    6196 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0604 23:14:34.628206    6196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0604 23:14:34.830628    6196 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0604 23:14:37.378641    6196 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.547994s)
	I0604 23:14:37.394686    6196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0604 23:14:37.436697    6196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0604 23:14:37.478760    6196 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0604 23:14:37.690904    6196 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0604 23:14:37.903801    6196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0604 23:14:38.122302    6196 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0604 23:14:38.174581    6196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0604 23:14:38.215469    6196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0604 23:14:38.441232    6196 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0604 23:14:38.551485    6196 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0604 23:14:38.565744    6196 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0604 23:14:38.576662    6196 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0604 23:14:38.576662    6196 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0604 23:14:38.576662    6196 command_runner.go:130] > Device: 0,22	Inode: 882         Links: 1
	I0604 23:14:38.576662    6196 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0604 23:14:38.576662    6196 command_runner.go:130] > Access: 2024-06-04 23:14:38.466219427 +0000
	I0604 23:14:38.576662    6196 command_runner.go:130] > Modify: 2024-06-04 23:14:38.466219427 +0000
	I0604 23:14:38.576662    6196 command_runner.go:130] > Change: 2024-06-04 23:14:38.470219463 +0000
	I0604 23:14:38.576662    6196 command_runner.go:130] >  Birth: -
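Note: "Will wait 60s for socket path" boils down to polling until /var/run/cri-dockerd.sock exists as a socket or the timeout expires; here the first stat already succeeds. A minimal sketch of such a wait loop; the 500ms poll interval is an assumption:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls for a unix socket at path until it appears or the
// timeout elapses.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
		fmt.Println(err)
	}
}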
	I0604 23:14:38.576662    6196 start.go:562] Will wait 60s for crictl version
	I0604 23:14:38.589950    6196 ssh_runner.go:195] Run: which crictl
	I0604 23:14:38.592518    6196 command_runner.go:130] > /usr/bin/crictl
	I0604 23:14:38.608349    6196 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0604 23:14:38.667046    6196 command_runner.go:130] > Version:  0.1.0
	I0604 23:14:38.668624    6196 command_runner.go:130] > RuntimeName:  docker
	I0604 23:14:38.668624    6196 command_runner.go:130] > RuntimeVersion:  26.1.3
	I0604 23:14:38.668624    6196 command_runner.go:130] > RuntimeApiVersion:  v1
	I0604 23:14:38.668624    6196 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.1.3
	RuntimeApiVersion:  v1
	I0604 23:14:38.678541    6196 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0604 23:14:38.710750    6196 command_runner.go:130] > 26.1.3
	I0604 23:14:38.720955    6196 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0604 23:14:38.754644    6196 command_runner.go:130] > 26.1.3
	I0604 23:14:38.759490    6196 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.1.3 ...
	I0604 23:14:38.759490    6196 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0604 23:14:38.764343    6196 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0604 23:14:38.764343    6196 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0604 23:14:38.764343    6196 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0604 23:14:38.764343    6196 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:24:f8:85 Flags:up|broadcast|multicast|running}
	I0604 23:14:38.766488    6196 ip.go:210] interface addr: fe80::4093:d10:ab69:6c7d/64
	I0604 23:14:38.766488    6196 ip.go:210] interface addr: 172.20.128.1/20
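Note: getIPForInterface walks the host's network interfaces looking for one whose name starts with "vEthernet (Default Switch)" and takes its IPv4 address as host.minikube.internal. A sketch of the same lookup with the net package; the prefix match mirrors the log, the rest is assumed:

package main

import (
	"fmt"
	"net"
	"strings"
)

// findSwitchAddrs returns the addresses of the first interface whose name
// starts with prefix.
func findSwitchAddrs(prefix string) ([]net.Addr, error) {
	ifaces, err := net.Interfaces()
	if err != nil {
		return nil, err
	}
	for _, iface := range ifaces {
		if strings.HasPrefix(iface.Name, prefix) {
			return iface.Addrs()
		}
	}
	return nil, fmt.Errorf("no interface matching %q", prefix)
}

func main() {
	addrs, err := findSwitchAddrs("vEthernet (Default Switch)")
	if err != nil {
		fmt.Println(err)
		return
	}
	for _, a := range addrs {
		fmt.Println("interface addr:", a)
	}
}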
	I0604 23:14:38.776971    6196 ssh_runner.go:195] Run: grep 172.20.128.1	host.minikube.internal$ /etc/hosts
	I0604 23:14:38.779153    6196 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.20.128.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0604 23:14:38.806971    6196 kubeadm.go:877] updating cluster {Name:multinode-022000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19024/minikube-v1.33.1-1717518792-19024-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717518322-19024@sha256:d2210ba725128d67c6173c8b8d82d6c8736e8dad7a6c389a278f795205c6764f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-022000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.128.97 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0604 23:14:38.807164    6196 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0604 23:14:38.818544    6196 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0604 23:14:38.842682    6196 docker.go:685] Got preloaded images: 
	I0604 23:14:38.842682    6196 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.1 wasn't preloaded
	I0604 23:14:38.859382    6196 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0604 23:14:38.880066    6196 command_runner.go:139] > {"Repositories":{}}
	I0604 23:14:38.892162    6196 ssh_runner.go:195] Run: which lz4
	I0604 23:14:38.898986    6196 command_runner.go:130] > /usr/bin/lz4
	I0604 23:14:38.898986    6196 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0604 23:14:38.914435    6196 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0604 23:14:38.922899    6196 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0604 23:14:38.922899    6196 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0604 23:14:38.922899    6196 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359564351 bytes)
	I0604 23:14:41.268355    6196 docker.go:649] duration metric: took 2.3693502s to copy over tarball
	I0604 23:14:41.282742    6196 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0604 23:14:49.856117    6196 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.5733096s)
	I0604 23:14:49.856192    6196 ssh_runner.go:146] rm: /preloaded.tar.lz4
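Note: the preload step copies the ~360MB preloaded-images tarball to the guest as /preloaded.tar.lz4, unpacks it into /var with xattrs preserved (so image layers keep their capabilities), then deletes the tarball. A sketch of the extraction command wrapped in Go; it requires lz4 and root, so treat it as illustration only:

package main

import (
	"fmt"
	"os/exec"
)

// extractPreload runs the same tar invocation seen in the log.
func extractPreload() error {
	out, err := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4").CombinedOutput()
	if err != nil {
		return fmt.Errorf("extract failed: %v\n%s", err, out)
	}
	return nil
}

func main() {
	if err := extractPreload(); err != nil {
		fmt.Println(err)
	}
}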
	I0604 23:14:49.918324    6196 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0604 23:14:49.941818    6196 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.11.1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.12-0":"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b":"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.30.1":"sha256:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a","registry.k8s.io/kube-apiserver@sha256:0d4a3051234387b78affbcde283dcde5df21e0d6d740c80c363db1cbb973b4ea":"sha256:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.30.1":"sha256:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c","registry.k8s.io/kube-controller-manager@sha256:0c34190fbf807746f6584104811ed5cda72fb30ce30a036c132dea692d55ec52":"sha256:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.30.1":"sha256:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd","registry.k8s.io/kube-proxy@sha256:a1754e5a33878878e78dd0141167e7c529d91eb9b36ffbbf91a6052257b3179c":"sha256:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.30.1":"sha256:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035","registry.k8s.io/kube-scheduler@sha256:74d02f6debc5ff3d3bc03f96ae029fb9c72ec1ea94c14e2cdf279939d8e0e036":"sha256:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
	I0604 23:14:49.942046    6196 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0604 23:14:49.987611    6196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0604 23:14:50.219421    6196 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0604 23:14:53.051938    6196 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.8324949s)
	I0604 23:14:53.065110    6196 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0604 23:14:53.100033    6196 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.1
	I0604 23:14:53.100033    6196 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.1
	I0604 23:14:53.100033    6196 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.1
	I0604 23:14:53.100033    6196 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.1
	I0604 23:14:53.100033    6196 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0604 23:14:53.100033    6196 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0604 23:14:53.100033    6196 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0604 23:14:53.100033    6196 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0604 23:14:53.100033    6196 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0604 23:14:53.100033    6196 cache_images.go:84] Images are preloaded, skipping loading
	I0604 23:14:53.100033    6196 kubeadm.go:928] updating node { 172.20.128.97 8443 v1.30.1 docker true true} ...
	I0604 23:14:53.100033    6196 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-022000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.20.128.97
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:multinode-022000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0604 23:14:53.111873    6196 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0604 23:14:53.151882    6196 command_runner.go:130] > cgroupfs
	I0604 23:14:53.152255    6196 cni.go:84] Creating CNI manager for ""
	I0604 23:14:53.152255    6196 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0604 23:14:53.152255    6196 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0604 23:14:53.152255    6196 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.20.128.97 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-022000 NodeName:multinode-022000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.20.128.97"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.20.128.97 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0604 23:14:53.153081    6196 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.20.128.97
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-022000"
	  kubeletExtraArgs:
	    node-ip: 172.20.128.97
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.20.128.97"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
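Note: the kubeadm config above is rendered by filling a template with the node IP, cluster name, Kubernetes version and pod subnet taken from the kubeadm options shown earlier. A minimal text/template sketch of that idea; the struct, field names and template text below are assumptions for illustration and are far smaller than minikube's real template:

package main

import (
	"os"
	"text/template"
)

// kubeadmParams holds the values substituted into the config template.
type kubeadmParams struct {
	NodeIP      string
	ClusterName string
	K8sVersion  string
	PodSubnet   string
}

const tpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: 8443
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
clusterName: {{.ClusterName}}
kubernetesVersion: {{.K8sVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(tpl))
	t.Execute(os.Stdout, kubeadmParams{
		NodeIP:      "172.20.128.97",
		ClusterName: "mk",
		K8sVersion:  "v1.30.1",
		PodSubnet:   "10.244.0.0/16",
	})
}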
	
	I0604 23:14:53.168653    6196 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0604 23:14:53.187974    6196 command_runner.go:130] > kubeadm
	I0604 23:14:53.187974    6196 command_runner.go:130] > kubectl
	I0604 23:14:53.187974    6196 command_runner.go:130] > kubelet
	I0604 23:14:53.187974    6196 binaries.go:44] Found k8s binaries, skipping transfer
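Note: "Found k8s binaries, skipping transfer" comes from checking that kubeadm, kubectl and kubelet already exist under /var/lib/minikube/binaries/<version>, so the slow copy from the host cache can be skipped. A small sketch of that check; the helper name is an assumption:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// haveKubeBinaries reports whether all three Kubernetes binaries are present
// under the versioned binaries directory used in the log.
func haveKubeBinaries(version string) bool {
	dir := filepath.Join("/var/lib/minikube/binaries", version)
	for _, bin := range []string{"kubeadm", "kubectl", "kubelet"} {
		if _, err := os.Stat(filepath.Join(dir, bin)); err != nil {
			return false
		}
	}
	return true
}

func main() {
	fmt.Println("binaries present:", haveKubeBinaries("v1.30.1"))
}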
	I0604 23:14:53.203850    6196 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0604 23:14:53.229310    6196 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0604 23:14:53.266279    6196 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0604 23:14:53.299520    6196 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0604 23:14:53.346531    6196 ssh_runner.go:195] Run: grep 172.20.128.97	control-plane.minikube.internal$ /etc/hosts
	I0604 23:14:53.354385    6196 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.20.128.97	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0604 23:14:53.393999    6196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0604 23:14:53.601151    6196 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0604 23:14:53.636447    6196 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-022000 for IP: 172.20.128.97
	I0604 23:14:53.636447    6196 certs.go:194] generating shared ca certs ...
	I0604 23:14:53.636447    6196 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0604 23:14:53.637372    6196 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0604 23:14:53.637556    6196 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0604 23:14:53.637556    6196 certs.go:256] generating profile certs ...
	I0604 23:14:53.638304    6196 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-022000\client.key
	I0604 23:14:53.638304    6196 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-022000\client.crt with IP's: []
	I0604 23:14:54.346251    6196 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-022000\client.crt ...
	I0604 23:14:54.346251    6196 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-022000\client.crt: {Name:mk15651533d2efea0de6b736ab8260c3beb97c9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0604 23:14:54.351244    6196 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-022000\client.key ...
	I0604 23:14:54.351244    6196 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-022000\client.key: {Name:mkbf57425a5409edb8a1d018ad39981898254d53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0604 23:14:54.353262    6196 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-022000\apiserver.key.bbd58bba
	I0604 23:14:54.353262    6196 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-022000\apiserver.crt.bbd58bba with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.20.128.97]
	I0604 23:14:54.907956    6196 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-022000\apiserver.crt.bbd58bba ...
	I0604 23:14:54.907956    6196 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-022000\apiserver.crt.bbd58bba: {Name:mk3371512a998263025c8a2ad881a0c7ecef2f88 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0604 23:14:54.909152    6196 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-022000\apiserver.key.bbd58bba ...
	I0604 23:14:54.909152    6196 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-022000\apiserver.key.bbd58bba: {Name:mk18ef8c9344444a9f2801dc94bc33a4bf8c1ce2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0604 23:14:54.910545    6196 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-022000\apiserver.crt.bbd58bba -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-022000\apiserver.crt
	I0604 23:14:54.918367    6196 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-022000\apiserver.key.bbd58bba -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-022000\apiserver.key
	I0604 23:14:54.926626    6196 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-022000\proxy-client.key
	I0604 23:14:54.926626    6196 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-022000\proxy-client.crt with IP's: []
	I0604 23:14:55.150107    6196 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-022000\proxy-client.crt ...
	I0604 23:14:55.150107    6196 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-022000\proxy-client.crt: {Name:mkb46a357200a337890a4d66bfd25e7283ff83ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0604 23:14:55.160389    6196 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-022000\proxy-client.key ...
	I0604 23:14:55.160389    6196 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-022000\proxy-client.key: {Name:mk1a03424a8e09d3e0f3edd9d29dfdb81ce7a4d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0604 23:14:55.161449    6196 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0604 23:14:55.162511    6196 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0604 23:14:55.162793    6196 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0604 23:14:55.163053    6196 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0604 23:14:55.163053    6196 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-022000\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0604 23:14:55.163601    6196 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-022000\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0604 23:14:55.163735    6196 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-022000\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0604 23:14:55.171052    6196 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-022000\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0604 23:14:55.173838    6196 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14064.pem (1338 bytes)
	W0604 23:14:55.173896    6196 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14064_empty.pem, impossibly tiny 0 bytes
	I0604 23:14:55.173896    6196 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0604 23:14:55.173896    6196 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0604 23:14:55.174753    6196 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0604 23:14:55.175018    6196 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0604 23:14:55.175018    6196 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\140642.pem (1708 bytes)
	I0604 23:14:55.175753    6196 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14064.pem -> /usr/share/ca-certificates/14064.pem
	I0604 23:14:55.176087    6196 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\140642.pem -> /usr/share/ca-certificates/140642.pem
	I0604 23:14:55.176453    6196 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0604 23:14:55.177703    6196 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0604 23:14:55.238880    6196 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0604 23:14:55.292377    6196 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0604 23:14:55.337695    6196 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0604 23:14:55.401628    6196 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-022000\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0604 23:14:55.455861    6196 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-022000\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0604 23:14:55.514523    6196 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-022000\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0604 23:14:55.564820    6196 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-022000\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0604 23:14:55.613750    6196 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14064.pem --> /usr/share/ca-certificates/14064.pem (1338 bytes)
	I0604 23:14:55.664171    6196 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\140642.pem --> /usr/share/ca-certificates/140642.pem (1708 bytes)
	I0604 23:14:55.719186    6196 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0604 23:14:55.772767    6196 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0604 23:14:55.820061    6196 ssh_runner.go:195] Run: openssl version
	I0604 23:14:55.830151    6196 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0604 23:14:55.843407    6196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0604 23:14:55.879153    6196 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0604 23:14:55.887250    6196 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jun  4 21:33 /usr/share/ca-certificates/minikubeCA.pem
	I0604 23:14:55.887250    6196 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun  4 21:33 /usr/share/ca-certificates/minikubeCA.pem
	I0604 23:14:55.898424    6196 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0604 23:14:55.908410    6196 command_runner.go:130] > b5213941
	I0604 23:14:55.920873    6196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0604 23:14:55.956538    6196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14064.pem && ln -fs /usr/share/ca-certificates/14064.pem /etc/ssl/certs/14064.pem"
	I0604 23:14:55.988004    6196 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14064.pem
	I0604 23:14:55.992914    6196 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jun  4 21:50 /usr/share/ca-certificates/14064.pem
	I0604 23:14:55.992914    6196 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun  4 21:50 /usr/share/ca-certificates/14064.pem
	I0604 23:14:56.008987    6196 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14064.pem
	I0604 23:14:56.018082    6196 command_runner.go:130] > 51391683
	I0604 23:14:56.032589    6196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14064.pem /etc/ssl/certs/51391683.0"
	I0604 23:14:56.069759    6196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/140642.pem && ln -fs /usr/share/ca-certificates/140642.pem /etc/ssl/certs/140642.pem"
	I0604 23:14:56.102318    6196 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/140642.pem
	I0604 23:14:56.111789    6196 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jun  4 21:50 /usr/share/ca-certificates/140642.pem
	I0604 23:14:56.112070    6196 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun  4 21:50 /usr/share/ca-certificates/140642.pem
	I0604 23:14:56.123319    6196 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/140642.pem
	I0604 23:14:56.133277    6196 command_runner.go:130] > 3ec20f2e
	I0604 23:14:56.149033    6196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/140642.pem /etc/ssl/certs/3ec20f2e.0"
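	The three ln -fs steps above follow OpenSSL's subject-hash lookup convention: openssl x509 -hash prints the subject hash, and a /etc/ssl/certs/<hash>.0 symlink is how TLS clients locate the CA. A minimal sketch of one such step (cert path illustrative):

	    CERT=/usr/share/ca-certificates/minikubeCA.pem
	    HASH=$(openssl x509 -hash -noout -in "$CERT")   # e.g. b5213941, as logged above
	    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"  # OpenSSL resolves CAs by <subject-hash>.0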
	I0604 23:14:56.184488    6196 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0604 23:14:56.187330    6196 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0604 23:14:56.190634    6196 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0604 23:14:56.191049    6196 kubeadm.go:391] StartCluster: {Name:multinode-022000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19024/minikube-v1.33.1-1717518792-19024-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717518322-19024@sha256:d2210ba725128d67c6173c8b8d82d6c8736e8dad7a6c389a278f795205c6764f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-022000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.128.97 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0604 23:14:56.200448    6196 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0604 23:14:56.239782    6196 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0604 23:14:56.258881    6196 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0604 23:14:56.258881    6196 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0604 23:14:56.258881    6196 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0604 23:14:56.271246    6196 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0604 23:14:56.305672    6196 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0604 23:14:56.325913    6196 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0604 23:14:56.326199    6196 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0604 23:14:56.326199    6196 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0604 23:14:56.326199    6196 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0604 23:14:56.326565    6196 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0604 23:14:56.326565    6196 kubeadm.go:156] found existing configuration files:
	
	I0604 23:14:56.338027    6196 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0604 23:14:56.360734    6196 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0604 23:14:56.366350    6196 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0604 23:14:56.380163    6196 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0604 23:14:56.412300    6196 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0604 23:14:56.427161    6196 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0604 23:14:56.433172    6196 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0604 23:14:56.445004    6196 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0604 23:14:56.479418    6196 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0604 23:14:56.491117    6196 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0604 23:14:56.491117    6196 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0604 23:14:56.512442    6196 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0604 23:14:56.550482    6196 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0604 23:14:56.569714    6196 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0604 23:14:56.570620    6196 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0604 23:14:56.583476    6196 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
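	Each of the four config checks above follows the same pattern: grep the existing kubeconfig for the expected control-plane endpoint and, if the file is missing or points elsewhere (grep exits non-zero), delete it so kubeadm regenerates it. A compact sketch of that loop over the same four files:

	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	      sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f" \
	        || sudo rm -f "/etc/kubernetes/$f"
	    done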
	I0604 23:14:56.604447    6196 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0604 23:14:57.057002    6196 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0604 23:14:57.057099    6196 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0604 23:15:11.814254    6196 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0604 23:15:11.814254    6196 command_runner.go:130] > [init] Using Kubernetes version: v1.30.1
	I0604 23:15:11.814254    6196 command_runner.go:130] > [preflight] Running pre-flight checks
	I0604 23:15:11.814254    6196 kubeadm.go:309] [preflight] Running pre-flight checks
	I0604 23:15:11.814254    6196 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0604 23:15:11.814254    6196 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0604 23:15:11.814879    6196 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0604 23:15:11.814879    6196 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0604 23:15:11.815027    6196 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0604 23:15:11.815147    6196 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0604 23:15:11.815499    6196 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0604 23:15:11.815499    6196 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0604 23:15:11.818530    6196 out.go:204]   - Generating certificates and keys ...
	I0604 23:15:11.818879    6196 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0604 23:15:11.818958    6196 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0604 23:15:11.819124    6196 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0604 23:15:11.819124    6196 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0604 23:15:11.819124    6196 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0604 23:15:11.819124    6196 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0604 23:15:11.819124    6196 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0604 23:15:11.819124    6196 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0604 23:15:11.819669    6196 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0604 23:15:11.819669    6196 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0604 23:15:11.819815    6196 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0604 23:15:11.819906    6196 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0604 23:15:11.820105    6196 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0604 23:15:11.820105    6196 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0604 23:15:11.820385    6196 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-022000] and IPs [172.20.128.97 127.0.0.1 ::1]
	I0604 23:15:11.820455    6196 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-022000] and IPs [172.20.128.97 127.0.0.1 ::1]
	I0604 23:15:11.820455    6196 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0604 23:15:11.820455    6196 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0604 23:15:11.820455    6196 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-022000] and IPs [172.20.128.97 127.0.0.1 ::1]
	I0604 23:15:11.820455    6196 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-022000] and IPs [172.20.128.97 127.0.0.1 ::1]
	I0604 23:15:11.821374    6196 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0604 23:15:11.821374    6196 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0604 23:15:11.821374    6196 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0604 23:15:11.821374    6196 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0604 23:15:11.821374    6196 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0604 23:15:11.821374    6196 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0604 23:15:11.821906    6196 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0604 23:15:11.821906    6196 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0604 23:15:11.822008    6196 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0604 23:15:11.822044    6196 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0604 23:15:11.822044    6196 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0604 23:15:11.822044    6196 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0604 23:15:11.822044    6196 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0604 23:15:11.822044    6196 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0604 23:15:11.822044    6196 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0604 23:15:11.822044    6196 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0604 23:15:11.822044    6196 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0604 23:15:11.822612    6196 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0604 23:15:11.822885    6196 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0604 23:15:11.822885    6196 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0604 23:15:11.822885    6196 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0604 23:15:11.822885    6196 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0604 23:15:11.825781    6196 out.go:204]   - Booting up control plane ...
	I0604 23:15:11.826022    6196 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0604 23:15:11.826022    6196 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0604 23:15:11.826022    6196 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0604 23:15:11.826022    6196 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0604 23:15:11.826022    6196 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0604 23:15:11.826022    6196 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0604 23:15:11.826613    6196 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0604 23:15:11.826613    6196 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0604 23:15:11.826847    6196 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0604 23:15:11.826847    6196 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0604 23:15:11.826847    6196 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0604 23:15:11.826847    6196 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0604 23:15:11.827304    6196 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0604 23:15:11.827304    6196 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0604 23:15:11.827304    6196 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0604 23:15:11.827304    6196 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0604 23:15:11.827304    6196 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.502489831s
	I0604 23:15:11.827304    6196 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 1.502489831s
	I0604 23:15:11.827304    6196 command_runner.go:130] > [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0604 23:15:11.827304    6196 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0604 23:15:11.827304    6196 command_runner.go:130] > [api-check] The API server is healthy after 7.50349621s
	I0604 23:15:11.827304    6196 kubeadm.go:309] [api-check] The API server is healthy after 7.50349621s
	I0604 23:15:11.828073    6196 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0604 23:15:11.828129    6196 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0604 23:15:11.828213    6196 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0604 23:15:11.828213    6196 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0604 23:15:11.828213    6196 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0604 23:15:11.828213    6196 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0604 23:15:11.828858    6196 kubeadm.go:309] [mark-control-plane] Marking the node multinode-022000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0604 23:15:11.828858    6196 command_runner.go:130] > [mark-control-plane] Marking the node multinode-022000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0604 23:15:11.828858    6196 kubeadm.go:309] [bootstrap-token] Using token: fs2z3x.tlj9242qgak2cvhr
	I0604 23:15:11.828858    6196 command_runner.go:130] > [bootstrap-token] Using token: fs2z3x.tlj9242qgak2cvhr
	I0604 23:15:11.831396    6196 out.go:204]   - Configuring RBAC rules ...
	I0604 23:15:11.834649    6196 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0604 23:15:11.834649    6196 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0604 23:15:11.834649    6196 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0604 23:15:11.834649    6196 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0604 23:15:11.835214    6196 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0604 23:15:11.835214    6196 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0604 23:15:11.835409    6196 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0604 23:15:11.835409    6196 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0604 23:15:11.835409    6196 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0604 23:15:11.835949    6196 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0604 23:15:11.835996    6196 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0604 23:15:11.835996    6196 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0604 23:15:11.835996    6196 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0604 23:15:11.835996    6196 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0604 23:15:11.835996    6196 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0604 23:15:11.835996    6196 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0604 23:15:11.836683    6196 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0604 23:15:11.836683    6196 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0604 23:15:11.836683    6196 kubeadm.go:309] 
	I0604 23:15:11.836683    6196 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0604 23:15:11.836683    6196 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0604 23:15:11.836683    6196 kubeadm.go:309] 
	I0604 23:15:11.836683    6196 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0604 23:15:11.836683    6196 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0604 23:15:11.836683    6196 kubeadm.go:309] 
	I0604 23:15:11.836683    6196 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0604 23:15:11.837225    6196 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0604 23:15:11.837273    6196 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0604 23:15:11.837273    6196 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0604 23:15:11.837382    6196 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0604 23:15:11.837382    6196 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0604 23:15:11.837382    6196 kubeadm.go:309] 
	I0604 23:15:11.837382    6196 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0604 23:15:11.837382    6196 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0604 23:15:11.837382    6196 kubeadm.go:309] 
	I0604 23:15:11.837382    6196 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0604 23:15:11.837382    6196 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0604 23:15:11.837382    6196 kubeadm.go:309] 
	I0604 23:15:11.837382    6196 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0604 23:15:11.837964    6196 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0604 23:15:11.838009    6196 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0604 23:15:11.838009    6196 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0604 23:15:11.838009    6196 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0604 23:15:11.838009    6196 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0604 23:15:11.838009    6196 kubeadm.go:309] 
	I0604 23:15:11.838009    6196 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0604 23:15:11.838542    6196 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0604 23:15:11.838693    6196 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0604 23:15:11.838737    6196 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0604 23:15:11.838737    6196 kubeadm.go:309] 
	I0604 23:15:11.838737    6196 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token fs2z3x.tlj9242qgak2cvhr \
	I0604 23:15:11.838737    6196 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token fs2z3x.tlj9242qgak2cvhr \
	I0604 23:15:11.839076    6196 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:0ff0b921e821f4428dbcf012dd411f291cf65527f1de8321919343295d694e15 \
	I0604 23:15:11.839076    6196 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:0ff0b921e821f4428dbcf012dd411f291cf65527f1de8321919343295d694e15 \
	I0604 23:15:11.839076    6196 kubeadm.go:309] 	--control-plane 
	I0604 23:15:11.839076    6196 command_runner.go:130] > 	--control-plane 
	I0604 23:15:11.839076    6196 kubeadm.go:309] 
	I0604 23:15:11.839355    6196 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0604 23:15:11.839355    6196 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0604 23:15:11.839355    6196 kubeadm.go:309] 
	I0604 23:15:11.839355    6196 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token fs2z3x.tlj9242qgak2cvhr \
	I0604 23:15:11.839355    6196 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token fs2z3x.tlj9242qgak2cvhr \
	I0604 23:15:11.839355    6196 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:0ff0b921e821f4428dbcf012dd411f291cf65527f1de8321919343295d694e15 
	I0604 23:15:11.839355    6196 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:0ff0b921e821f4428dbcf012dd411f291cf65527f1de8321919343295d694e15 
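	The join commands printed above pin the cluster CA via --discovery-token-ca-cert-hash. If that output is lost, the same sha256 value can be recomputed on the control plane from the CA public key using the pipeline documented for kubeadm (here pointed at the certificateDir logged above, /var/lib/minikube/certs):

	    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	      | openssl rsa -pubin -outform der 2>/dev/null \
	      | openssl dgst -sha256 -hex | sed 's/^.* //'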
	I0604 23:15:11.839355    6196 cni.go:84] Creating CNI manager for ""
	I0604 23:15:11.839355    6196 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0604 23:15:11.840623    6196 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0604 23:15:11.859991    6196 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0604 23:15:11.868798    6196 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0604 23:15:11.868798    6196 command_runner.go:130] >   Size: 2781656   	Blocks: 5440       IO Block: 4096   regular file
	I0604 23:15:11.868798    6196 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0604 23:15:11.868798    6196 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0604 23:15:11.868798    6196 command_runner.go:130] > Access: 2024-06-04 23:13:06.646457700 +0000
	I0604 23:15:11.868798    6196 command_runner.go:130] > Modify: 2024-06-04 20:55:58.000000000 +0000
	I0604 23:15:11.868798    6196 command_runner.go:130] > Change: 2024-06-04 23:12:58.070000000 +0000
	I0604 23:15:11.868798    6196 command_runner.go:130] >  Birth: -
	I0604 23:15:11.868798    6196 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.1/kubectl ...
	I0604 23:15:11.868798    6196 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0604 23:15:11.918092    6196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0604 23:15:12.354092    6196 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0604 23:15:12.354218    6196 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0604 23:15:12.354218    6196 command_runner.go:130] > serviceaccount/kindnet created
	I0604 23:15:12.354218    6196 command_runner.go:130] > daemonset.apps/kindnet created
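	The applied CNI manifest creates the kindnet RBAC objects and DaemonSet listed above. A quick follow-up check that the pod network is actually rolling out (plain kubectl; the app=kindnet label is assumed from the usual kindnet manifest):

	    kubectl -n kube-system get daemonset kindnet
	    kubectl -n kube-system get pods -l app=kindnet -o wide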
	I0604 23:15:12.354218    6196 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0604 23:15:12.374219    6196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 23:15:12.375265    6196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-022000 minikube.k8s.io/updated_at=2024_06_04T23_15_12_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=901ac483c3e1097c63cda7493d918b612a8127f5 minikube.k8s.io/name=multinode-022000 minikube.k8s.io/primary=true
	I0604 23:15:12.380446    6196 command_runner.go:130] > -16
	I0604 23:15:12.380446    6196 ops.go:34] apiserver oom_adj: -16
	I0604 23:15:12.584305    6196 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0604 23:15:12.589832    6196 command_runner.go:130] > node/multinode-022000 labeled
	I0604 23:15:12.599351    6196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 23:15:12.742639    6196 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0604 23:15:13.115742    6196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 23:15:13.236723    6196 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0604 23:15:13.618293    6196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 23:15:13.737240    6196 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0604 23:15:14.113848    6196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 23:15:14.221272    6196 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0604 23:15:14.600491    6196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 23:15:14.708338    6196 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0604 23:15:15.104814    6196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 23:15:15.203342    6196 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0604 23:15:15.615158    6196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 23:15:15.720265    6196 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0604 23:15:16.096495    6196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 23:15:16.213033    6196 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0604 23:15:16.597273    6196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 23:15:16.712009    6196 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0604 23:15:17.109214    6196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 23:15:17.223941    6196 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0604 23:15:17.608086    6196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 23:15:17.740862    6196 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0604 23:15:18.101412    6196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 23:15:18.209689    6196 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0604 23:15:18.619010    6196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 23:15:18.730047    6196 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0604 23:15:19.108983    6196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 23:15:19.223667    6196 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0604 23:15:19.612729    6196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 23:15:19.737656    6196 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0604 23:15:20.106209    6196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 23:15:20.217605    6196 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0604 23:15:20.599134    6196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 23:15:20.709610    6196 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0604 23:15:21.106521    6196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 23:15:21.227066    6196 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0604 23:15:21.615437    6196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 23:15:21.728309    6196 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0604 23:15:22.105319    6196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 23:15:22.214620    6196 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0604 23:15:22.598134    6196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 23:15:22.735396    6196 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0604 23:15:23.115964    6196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 23:15:23.225821    6196 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0604 23:15:23.606058    6196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 23:15:23.730992    6196 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0604 23:15:24.105458    6196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 23:15:24.234573    6196 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0604 23:15:24.617412    6196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0604 23:15:24.768031    6196 command_runner.go:130] > NAME      SECRETS   AGE
	I0604 23:15:24.768077    6196 command_runner.go:130] > default   0         0s
	I0604 23:15:24.768358    6196 kubeadm.go:1107] duration metric: took 12.4140455s to wait for elevateKubeSystemPrivileges
	W0604 23:15:24.768358    6196 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0604 23:15:24.768358    6196 kubeadm.go:393] duration metric: took 28.5770916s to StartCluster
	I0604 23:15:24.768358    6196 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0604 23:15:24.768358    6196 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0604 23:15:24.770670    6196 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0604 23:15:24.772442    6196 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0604 23:15:24.772442    6196 start.go:234] Will wait 6m0s for node &{Name: IP:172.20.128.97 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0604 23:15:24.772442    6196 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0604 23:15:24.777940    6196 out.go:177] * Verifying Kubernetes components...
	I0604 23:15:24.772442    6196 addons.go:69] Setting storage-provisioner=true in profile "multinode-022000"
	I0604 23:15:24.772442    6196 addons.go:69] Setting default-storageclass=true in profile "multinode-022000"
	I0604 23:15:24.773172    6196 config.go:182] Loaded profile config "multinode-022000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0604 23:15:24.780892    6196 addons.go:234] Setting addon storage-provisioner=true in "multinode-022000"
	I0604 23:15:24.780959    6196 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-022000"
	I0604 23:15:24.781051    6196 host.go:66] Checking if "multinode-022000" exists ...
	I0604 23:15:24.781106    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000 ).state
	I0604 23:15:24.783678    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000 ).state
	I0604 23:15:24.798420    6196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0604 23:15:25.040504    6196 command_runner.go:130] > apiVersion: v1
	I0604 23:15:25.040504    6196 command_runner.go:130] > data:
	I0604 23:15:25.040504    6196 command_runner.go:130] >   Corefile: |
	I0604 23:15:25.040504    6196 command_runner.go:130] >     .:53 {
	I0604 23:15:25.040504    6196 command_runner.go:130] >         errors
	I0604 23:15:25.040504    6196 command_runner.go:130] >         health {
	I0604 23:15:25.040504    6196 command_runner.go:130] >            lameduck 5s
	I0604 23:15:25.040504    6196 command_runner.go:130] >         }
	I0604 23:15:25.040504    6196 command_runner.go:130] >         ready
	I0604 23:15:25.040504    6196 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0604 23:15:25.040504    6196 command_runner.go:130] >            pods insecure
	I0604 23:15:25.040504    6196 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0604 23:15:25.040504    6196 command_runner.go:130] >            ttl 30
	I0604 23:15:25.040504    6196 command_runner.go:130] >         }
	I0604 23:15:25.040504    6196 command_runner.go:130] >         prometheus :9153
	I0604 23:15:25.040504    6196 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0604 23:15:25.040504    6196 command_runner.go:130] >            max_concurrent 1000
	I0604 23:15:25.040504    6196 command_runner.go:130] >         }
	I0604 23:15:25.040504    6196 command_runner.go:130] >         cache 30
	I0604 23:15:25.040504    6196 command_runner.go:130] >         loop
	I0604 23:15:25.040504    6196 command_runner.go:130] >         reload
	I0604 23:15:25.040504    6196 command_runner.go:130] >         loadbalance
	I0604 23:15:25.040504    6196 command_runner.go:130] >     }
	I0604 23:15:25.040504    6196 command_runner.go:130] > kind: ConfigMap
	I0604 23:15:25.040504    6196 command_runner.go:130] > metadata:
	I0604 23:15:25.040504    6196 command_runner.go:130] >   creationTimestamp: "2024-06-04T23:15:11Z"
	I0604 23:15:25.040504    6196 command_runner.go:130] >   name: coredns
	I0604 23:15:25.040504    6196 command_runner.go:130] >   namespace: kube-system
	I0604 23:15:25.040504    6196 command_runner.go:130] >   resourceVersion: "231"
	I0604 23:15:25.040504    6196 command_runner.go:130] >   uid: 76c64db5-87c5-4704-a57d-c416baff3d22
	I0604 23:15:25.040504    6196 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.20.128.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0604 23:15:25.179821    6196 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0604 23:15:25.658423    6196 command_runner.go:130] > configmap/coredns replaced
	I0604 23:15:25.658669    6196 start.go:946] {"host.minikube.internal": 172.20.128.1} host record injected into CoreDNS's ConfigMap
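	The sed pipeline above patches two things into the Corefile it just read: a log directive ahead of errors, and a hosts block ahead of the forward plugin so host.minikube.internal resolves to the Hyper-V host. Reading the ConfigMap back should show a fragment roughly like this (reconstructed from the sed expressions, not re-fetched from the cluster):

	    kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
	    # expected to contain, among the existing plugins:
	    #     log
	    #     errors
	    #     hosts {
	    #        172.20.128.1 host.minikube.internal
	    #        fallthrough
	    #     }
	    #     forward . /etc/resolv.conf {
	    #        max_concurrent 1000
	    #     }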
	I0604 23:15:25.660332    6196 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0604 23:15:25.660730    6196 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0604 23:15:25.661992    6196 kapi.go:59] client config for multinode-022000: &rest.Config{Host:"https://172.20.128.97:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-022000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-022000\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAD
ata:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x240e1a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0604 23:15:25.662206    6196 kapi.go:59] client config for multinode-022000: &rest.Config{Host:"https://172.20.128.97:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-022000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-022000\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAD
ata:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x240e1a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
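
The two client-config dumps above amount to building a client-go rest.Config from the profile's client certificate, key, and CA. A minimal sketch under those assumptions (Host and file paths are copied from the log; this is illustrative, not minikube's kapi.go code):

    package main

    import (
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        // Certificate paths and API server address as logged for the multinode-022000 profile.
        cfg := &rest.Config{
            Host: "https://172.20.128.97:8443",
            TLSClientConfig: rest.TLSClientConfig{
                CertFile: `C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-022000\client.crt`,
                KeyFile:  `C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-022000\client.key`,
                CAFile:   `C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt`,
            },
        }
        // A clientset built from this config issues the GET/PUT requests seen below.
        clientset, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        fmt.Println("client ready:", clientset != nil)
    }
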
	I0604 23:15:25.664409    6196 cert_rotation.go:137] Starting client certificate rotation controller
	I0604 23:15:25.665126    6196 node_ready.go:35] waiting up to 6m0s for node "multinode-022000" to be "Ready" ...
	I0604 23:15:25.665505    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000
	I0604 23:15:25.665505    6196 round_trippers.go:463] GET https://172.20.128.97:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0604 23:15:25.665564    6196 round_trippers.go:469] Request Headers:
	I0604 23:15:25.665564    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:15:25.665564    6196 round_trippers.go:469] Request Headers:
	I0604 23:15:25.665657    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:15:25.665690    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:15:25.665690    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:15:25.696775    6196 round_trippers.go:574] Response Status: 200 OK in 30 milliseconds
	I0604 23:15:25.696822    6196 round_trippers.go:577] Response Headers:
	I0604 23:15:25.696822    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:15:25.696822    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:15:25.696822    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:15:25.696822    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:15:25.696822    6196 round_trippers.go:580]     Content-Length: 291
	I0604 23:15:25.696822    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:15:25 GMT
	I0604 23:15:25.696822    6196 round_trippers.go:580]     Audit-Id: fecd57eb-5ddb-4f78-be51-26c78c9d6fca
	I0604 23:15:25.696822    6196 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"76656786-0932-4c4f-959b-ce3529a09397","resourceVersion":"360","creationTimestamp":"2024-06-04T23:15:11Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0604 23:15:25.697815    6196 round_trippers.go:574] Response Status: 200 OK in 32 milliseconds
	I0604 23:15:25.697815    6196 round_trippers.go:577] Response Headers:
	I0604 23:15:25.697815    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:15:25.697815    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:15:25 GMT
	I0604 23:15:25.697815    6196 round_trippers.go:580]     Audit-Id: 10659bc8-98a8-48f8-8eb3-112d0ab2bdbc
	I0604 23:15:25.697815    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:15:25.697815    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:15:25.697815    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:15:25.697815    6196 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"76656786-0932-4c4f-959b-ce3529a09397","resourceVersion":"360","creationTimestamp":"2024-06-04T23:15:11Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0604 23:15:25.700249    6196 round_trippers.go:463] PUT https://172.20.128.97:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0604 23:15:25.700249    6196 round_trippers.go:469] Request Headers:
	I0604 23:15:25.700249    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:15:25.700249    6196 round_trippers.go:473]     Content-Type: application/json
	I0604 23:15:25.700249    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:15:25.702872    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000","uid":"e6c7d6d5-31ef-4aae-82b1-0d1130b29243","resourceVersion":"344","creationTimestamp":"2024-06-04T23:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_04T23_15_12_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-04T23:15:07Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0604 23:15:25.726666    6196 round_trippers.go:574] Response Status: 200 OK in 26 milliseconds
	I0604 23:15:25.730508    6196 round_trippers.go:577] Response Headers:
	I0604 23:15:25.730508    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:15:25 GMT
	I0604 23:15:25.730508    6196 round_trippers.go:580]     Audit-Id: bf3a5c9c-dc7f-45fc-be3e-d7cb1b1e71dd
	I0604 23:15:25.730508    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:15:25.730508    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:15:25.730508    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:15:25.730508    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:15:25.730591    6196 round_trippers.go:580]     Content-Length: 291
	I0604 23:15:25.730659    6196 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"76656786-0932-4c4f-959b-ce3529a09397","resourceVersion":"362","creationTimestamp":"2024-06-04T23:15:11Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0604 23:15:26.184871    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000
	I0604 23:15:26.184871    6196 round_trippers.go:463] GET https://172.20.128.97:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0604 23:15:26.185329    6196 round_trippers.go:469] Request Headers:
	I0604 23:15:26.185373    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:15:26.184871    6196 round_trippers.go:469] Request Headers:
	I0604 23:15:26.185373    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:15:26.185373    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:15:26.185373    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:15:26.194558    6196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0604 23:15:26.194653    6196 round_trippers.go:577] Response Headers:
	I0604 23:15:26.194653    6196 round_trippers.go:580]     Content-Length: 291
	I0604 23:15:26.194653    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:15:26 GMT
	I0604 23:15:26.194653    6196 round_trippers.go:580]     Audit-Id: b8a5f5e8-950d-4741-bff6-92a281d8d6f1
	I0604 23:15:26.194708    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:15:26.194708    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:15:26.194708    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:15:26.194607    6196 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0604 23:15:26.194708    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:15:26.194708    6196 round_trippers.go:577] Response Headers:
	I0604 23:15:26.194708    6196 round_trippers.go:580]     Audit-Id: 1e33f977-5257-4bce-9542-84d0b678abbd
	I0604 23:15:26.194708    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:15:26.194708    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:15:26.194708    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:15:26.194708    6196 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"76656786-0932-4c4f-959b-ce3529a09397","resourceVersion":"373","creationTimestamp":"2024-06-04T23:15:11Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0604 23:15:26.194708    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:15:26.194708    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:15:26 GMT
	I0604 23:15:26.195089    6196 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-022000" context rescaled to 1 replicas
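
The GET and PUT against .../deployments/coredns/scale above are the Scale-subresource round trip that rescales CoreDNS to one replica. A minimal client-go sketch of the same operation (kubeconfig path taken from the log; not the kapi.go implementation itself):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", `C:\Users\jenkins.minikube6\minikube-integration\kubeconfig`)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ctx := context.Background()
        // GET the current Scale of the coredns deployment (spec.replicas was 2)...
        scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        // ...then PUT it back with spec.replicas set to 1.
        scale.Spec.Replicas = 1
        if _, err := cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
            panic(err)
        }
        fmt.Println("coredns rescaled to 1 replica")
    }
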
	I0604 23:15:26.195381    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000","uid":"e6c7d6d5-31ef-4aae-82b1-0d1130b29243","resourceVersion":"344","creationTimestamp":"2024-06-04T23:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_04T23_15_12_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-04T23:15:07Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0604 23:15:26.685182    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000
	I0604 23:15:26.685182    6196 round_trippers.go:469] Request Headers:
	I0604 23:15:26.685182    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:15:26.685182    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:15:26.687737    6196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0604 23:15:26.689640    6196 round_trippers.go:577] Response Headers:
	I0604 23:15:26.689640    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:15:26.689640    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:15:26.689640    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:15:26.689640    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:15:26 GMT
	I0604 23:15:26.689640    6196 round_trippers.go:580]     Audit-Id: fc2180c8-f4c7-4384-80a0-801bdad36980
	I0604 23:15:26.689640    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:15:26.695210    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000","uid":"e6c7d6d5-31ef-4aae-82b1-0d1130b29243","resourceVersion":"344","creationTimestamp":"2024-06-04T23:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_04T23_15_12_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-04T23:15:07Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0604 23:15:27.170161    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000
	I0604 23:15:27.170161    6196 round_trippers.go:469] Request Headers:
	I0604 23:15:27.170258    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:15:27.170258    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:15:27.170580    6196 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0604 23:15:27.174291    6196 round_trippers.go:577] Response Headers:
	I0604 23:15:27.174291    6196 round_trippers.go:580]     Audit-Id: 9d95ed19-1620-4262-8deb-949720e02ae9
	I0604 23:15:27.174291    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:15:27.174291    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:15:27.174291    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:15:27.174291    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:15:27.174291    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:15:27 GMT
	I0604 23:15:27.174665    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000","uid":"e6c7d6d5-31ef-4aae-82b1-0d1130b29243","resourceVersion":"344","creationTimestamp":"2024-06-04T23:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_04T23_15_12_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-04T23:15:07Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0604 23:15:27.272369    6196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:15:27.272369    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:15:27.272369    6196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:15:27.272369    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:15:27.298589    6196 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0604 23:15:27.288756    6196 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0604 23:15:27.304601    6196 kapi.go:59] client config for multinode-022000: &rest.Config{Host:"https://172.20.128.97:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-022000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-022000\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAD
ata:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x240e1a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0604 23:15:27.321291    6196 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0604 23:15:27.321350    6196 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0604 23:15:27.321350    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000 ).state
	I0604 23:15:27.321350    6196 addons.go:234] Setting addon default-storageclass=true in "multinode-022000"
	I0604 23:15:27.321350    6196 host.go:66] Checking if "multinode-022000" exists ...
	I0604 23:15:27.323373    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000 ).state
	I0604 23:15:27.683848    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000
	I0604 23:15:27.683848    6196 round_trippers.go:469] Request Headers:
	I0604 23:15:27.683848    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:15:27.683848    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:15:27.700642    6196 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0604 23:15:27.700642    6196 round_trippers.go:577] Response Headers:
	I0604 23:15:27.703249    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:15:27.703249    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:15:27.703249    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:15:27.703249    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:15:27 GMT
	I0604 23:15:27.703249    6196 round_trippers.go:580]     Audit-Id: 3b2abc18-c863-4d70-a1e2-3f0ab41e3309
	I0604 23:15:27.703249    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:15:27.703480    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000","uid":"e6c7d6d5-31ef-4aae-82b1-0d1130b29243","resourceVersion":"344","creationTimestamp":"2024-06-04T23:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_04T23_15_12_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-04T23:15:07Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0604 23:15:27.703480    6196 node_ready.go:53] node "multinode-022000" has status "Ready":"False"
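
node_ready.go is polling GET /api/v1/nodes/multinode-022000 (roughly every half second, for up to 6m0s) until the node's Ready condition reports True. A minimal sketch of such a loop with client-go, assuming the same kubeconfig path as above; it is not the test's own code:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", `C:\Users\jenkins.minikube6\minikube-integration\kubeconfig`)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Re-fetch the node until its Ready condition is True, giving up after 6 minutes.
        err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                node, err := cs.CoreV1().Nodes().Get(ctx, "multinode-022000", metav1.GetOptions{})
                if err != nil {
                    return false, err
                }
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
        fmt.Println("node Ready:", err == nil)
    }
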
	I0604 23:15:28.181012    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000
	I0604 23:15:28.181012    6196 round_trippers.go:469] Request Headers:
	I0604 23:15:28.181117    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:15:28.181117    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:15:28.184827    6196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0604 23:15:28.184899    6196 round_trippers.go:577] Response Headers:
	I0604 23:15:28.184966    6196 round_trippers.go:580]     Audit-Id: 9674cc69-74bf-4b69-a88f-e9b08cab16e2
	I0604 23:15:28.184966    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:15:28.184966    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:15:28.184966    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:15:28.184966    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:15:28.185023    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:15:28 GMT
	I0604 23:15:28.185515    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000","uid":"e6c7d6d5-31ef-4aae-82b1-0d1130b29243","resourceVersion":"344","creationTimestamp":"2024-06-04T23:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_04T23_15_12_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-04T23:15:07Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0604 23:15:28.668128    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000
	I0604 23:15:28.668128    6196 round_trippers.go:469] Request Headers:
	I0604 23:15:28.668128    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:15:28.668128    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:15:28.673499    6196 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0604 23:15:28.673499    6196 round_trippers.go:577] Response Headers:
	I0604 23:15:28.673499    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:15:28.673616    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:15:28.673616    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:15:28.673616    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:15:28.673616    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:15:28 GMT
	I0604 23:15:28.673672    6196 round_trippers.go:580]     Audit-Id: ba57bb6c-71db-4f10-a75f-cdbe838a378c
	I0604 23:15:28.673915    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000","uid":"e6c7d6d5-31ef-4aae-82b1-0d1130b29243","resourceVersion":"344","creationTimestamp":"2024-06-04T23:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_04T23_15_12_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-04T23:15:07Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0604 23:15:29.176396    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000
	I0604 23:15:29.176396    6196 round_trippers.go:469] Request Headers:
	I0604 23:15:29.176396    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:15:29.176396    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:15:29.177435    6196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0604 23:15:29.177435    6196 round_trippers.go:577] Response Headers:
	I0604 23:15:29.180239    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:15:29.180239    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:15:29.180239    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:15:29 GMT
	I0604 23:15:29.180239    6196 round_trippers.go:580]     Audit-Id: c2350ca2-ebc7-4b8d-a3ee-71aea592ceab
	I0604 23:15:29.180239    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:15:29.180239    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:15:29.180385    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000","uid":"e6c7d6d5-31ef-4aae-82b1-0d1130b29243","resourceVersion":"344","creationTimestamp":"2024-06-04T23:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_04T23_15_12_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-04T23:15:07Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0604 23:15:29.670508    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000
	I0604 23:15:29.670590    6196 round_trippers.go:469] Request Headers:
	I0604 23:15:29.670654    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:15:29.670654    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:15:29.674810    6196 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0604 23:15:29.676412    6196 round_trippers.go:577] Response Headers:
	I0604 23:15:29.676412    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:15:29.676412    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:15:29.676412    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:15:29.676412    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:15:29 GMT
	I0604 23:15:29.676412    6196 round_trippers.go:580]     Audit-Id: 34b3ef29-0923-4520-a965-412cc1ffcdad
	I0604 23:15:29.676412    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:15:29.677106    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000","uid":"e6c7d6d5-31ef-4aae-82b1-0d1130b29243","resourceVersion":"344","creationTimestamp":"2024-06-04T23:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_04T23_15_12_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-04T23:15:07Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0604 23:15:29.817347    6196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:15:29.817347    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:15:29.817347    6196 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0604 23:15:29.817347    6196 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0604 23:15:29.817347    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000 ).state
	I0604 23:15:29.868879    6196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:15:29.868971    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:15:29.868971    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000 ).networkadapters[0]).ipaddresses[0]
	I0604 23:15:30.177524    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000
	I0604 23:15:30.177524    6196 round_trippers.go:469] Request Headers:
	I0604 23:15:30.177524    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:15:30.177524    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:15:30.178787    6196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0604 23:15:30.178787    6196 round_trippers.go:577] Response Headers:
	I0604 23:15:30.178787    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:15:30.178787    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:15:30.178787    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:15:30 GMT
	I0604 23:15:30.178787    6196 round_trippers.go:580]     Audit-Id: c7a43ddb-f866-4847-9f80-69decc4fa67e
	I0604 23:15:30.178787    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:15:30.178787    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:15:30.182685    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000","uid":"e6c7d6d5-31ef-4aae-82b1-0d1130b29243","resourceVersion":"344","creationTimestamp":"2024-06-04T23:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_04T23_15_12_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-04T23:15:07Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0604 23:15:30.183052    6196 node_ready.go:53] node "multinode-022000" has status "Ready":"False"
	I0604 23:15:30.666783    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000
	I0604 23:15:30.666903    6196 round_trippers.go:469] Request Headers:
	I0604 23:15:30.666903    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:15:30.666903    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:15:30.668745    6196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0604 23:15:30.671245    6196 round_trippers.go:577] Response Headers:
	I0604 23:15:30.671245    6196 round_trippers.go:580]     Audit-Id: fe6fcf36-59f5-451d-89e5-27e837a4b1c3
	I0604 23:15:30.671245    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:15:30.671245    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:15:30.671245    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:15:30.671245    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:15:30.671245    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:15:30 GMT
	I0604 23:15:30.671703    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000","uid":"e6c7d6d5-31ef-4aae-82b1-0d1130b29243","resourceVersion":"344","creationTimestamp":"2024-06-04T23:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_04T23_15_12_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-04T23:15:07Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0604 23:15:31.183756    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000
	I0604 23:15:31.183756    6196 round_trippers.go:469] Request Headers:
	I0604 23:15:31.183756    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:15:31.183756    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:15:31.186158    6196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0604 23:15:31.187906    6196 round_trippers.go:577] Response Headers:
	I0604 23:15:31.187906    6196 round_trippers.go:580]     Audit-Id: a24a5469-669b-4e26-affe-75409477066e
	I0604 23:15:31.187906    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:15:31.187906    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:15:31.187906    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:15:31.187906    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:15:31.187906    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:15:31 GMT
	I0604 23:15:31.187906    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000","uid":"e6c7d6d5-31ef-4aae-82b1-0d1130b29243","resourceVersion":"344","creationTimestamp":"2024-06-04T23:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_04T23_15_12_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-04T23:15:07Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0604 23:15:31.671969    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000
	I0604 23:15:31.672060    6196 round_trippers.go:469] Request Headers:
	I0604 23:15:31.672060    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:15:31.672060    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:15:31.672592    6196 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0604 23:15:31.676096    6196 round_trippers.go:577] Response Headers:
	I0604 23:15:31.676096    6196 round_trippers.go:580]     Audit-Id: 28426735-6074-4bb6-ab13-8ddbe70565cd
	I0604 23:15:31.676096    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:15:31.676096    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:15:31.676210    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:15:31.676210    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:15:31.676210    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:15:31 GMT
	I0604 23:15:31.676497    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000","uid":"e6c7d6d5-31ef-4aae-82b1-0d1130b29243","resourceVersion":"344","creationTimestamp":"2024-06-04T23:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_04T23_15_12_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-04T23:15:07Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0604 23:15:32.167135    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000
	I0604 23:15:32.167202    6196 round_trippers.go:469] Request Headers:
	I0604 23:15:32.167268    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:15:32.167268    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:15:32.217983    6196 round_trippers.go:574] Response Status: 200 OK in 50 milliseconds
	I0604 23:15:32.217983    6196 round_trippers.go:577] Response Headers:
	I0604 23:15:32.217983    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:15:32.229042    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:15:32.229042    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:15:32.229042    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:15:32.229042    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:15:32 GMT
	I0604 23:15:32.229042    6196 round_trippers.go:580]     Audit-Id: a488267f-4da8-4409-9866-564da63212ff
	I0604 23:15:32.229929    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000","uid":"e6c7d6d5-31ef-4aae-82b1-0d1130b29243","resourceVersion":"344","creationTimestamp":"2024-06-04T23:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_04T23_15_12_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-04T23:15:07Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0604 23:15:32.229993    6196 node_ready.go:53] node "multinode-022000" has status "Ready":"False"
	I0604 23:15:32.297741    6196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:15:32.300327    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:15:32.300474    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000 ).networkadapters[0]).ipaddresses[0]
	I0604 23:15:32.676469    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000
	I0604 23:15:32.676533    6196 round_trippers.go:469] Request Headers:
	I0604 23:15:32.676587    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:15:32.676587    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:15:32.682072    6196 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0604 23:15:32.682162    6196 round_trippers.go:577] Response Headers:
	I0604 23:15:32.682162    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:15:32.682162    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:15:32 GMT
	I0604 23:15:32.682230    6196 round_trippers.go:580]     Audit-Id: a66248f9-8456-4ce5-986c-63863de0fa47
	I0604 23:15:32.682274    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:15:32.682274    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:15:32.682356    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:15:32.682968    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000","uid":"e6c7d6d5-31ef-4aae-82b1-0d1130b29243","resourceVersion":"344","creationTimestamp":"2024-06-04T23:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_04T23_15_12_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-04T23:15:07Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0604 23:15:32.735581    6196 main.go:141] libmachine: [stdout =====>] : 172.20.128.97
	
	I0604 23:15:32.735581    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:15:32.745280    6196 sshutil.go:53] new ssh client: &{IP:172.20.128.97 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-022000\id_rsa Username:docker}
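
The Hyper-V driver resolves the VM's state and IP by shelling out to PowerShell, as the [executing ==>] lines show, and then opens an SSH client against that address. A minimal Go sketch of the IP query alone (VM name and powershell.exe path copied from the log; error handling simplified, and this is not libmachine's code):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // The same PowerShell expression the driver logs above.
        ps := `(( Hyper-V\Get-VM multinode-022000 ).networkadapters[0]).ipaddresses[0]`
        out, err := exec.Command(
            `C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
            "-NoProfile", "-NonInteractive", ps,
        ).Output()
        if err != nil {
            panic(err)
        }
        fmt.Println(strings.TrimSpace(string(out))) // e.g. 172.20.128.97
    }
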
	I0604 23:15:32.938349    6196 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0604 23:15:33.172154    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000
	I0604 23:15:33.172154    6196 round_trippers.go:469] Request Headers:
	I0604 23:15:33.172154    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:15:33.172154    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:15:33.174411    6196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0604 23:15:33.174411    6196 round_trippers.go:577] Response Headers:
	I0604 23:15:33.174411    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:15:33.174411    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:15:33.174411    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:15:33 GMT
	I0604 23:15:33.174411    6196 round_trippers.go:580]     Audit-Id: 24557e18-305b-4ae7-990f-e29872d6cc6b
	I0604 23:15:33.176193    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:15:33.176311    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:15:33.177052    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000","uid":"e6c7d6d5-31ef-4aae-82b1-0d1130b29243","resourceVersion":"344","creationTimestamp":"2024-06-04T23:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_04T23_15_12_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-04T23:15:07Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0604 23:15:33.507857    6196 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0604 23:15:33.507857    6196 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0604 23:15:33.507857    6196 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0604 23:15:33.507857    6196 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0604 23:15:33.507857    6196 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0604 23:15:33.507857    6196 command_runner.go:130] > pod/storage-provisioner created
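
The storage-provisioner manifest was copied into the VM earlier and is applied here with the bundled kubectl over SSH. A rough sketch of that remote apply using golang.org/x/crypto/ssh (key path, user, address, and command copied from the log; this is illustrative, not minikube's ssh_runner):

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile(`C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-022000\id_rsa`)
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
        }
        client, err := ssh.Dial("tcp", "172.20.128.97:22", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer sess.Close()
        out, err := sess.CombinedOutput("sudo KUBECONFIG=/var/lib/minikube/kubeconfig " +
            "/var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml")
        fmt.Println(string(out), err)
    }
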
	I0604 23:15:33.669326    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000
	I0604 23:15:33.669326    6196 round_trippers.go:469] Request Headers:
	I0604 23:15:33.669326    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:15:33.669326    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:15:33.670449    6196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0604 23:15:33.673954    6196 round_trippers.go:577] Response Headers:
	I0604 23:15:33.674141    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:15:33.674141    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:15:33.674141    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:15:33.674141    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:15:33 GMT
	I0604 23:15:33.674141    6196 round_trippers.go:580]     Audit-Id: 02e1a508-d2b5-4f83-b815-bc1bed45b181
	I0604 23:15:33.674141    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:15:33.674141    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000","uid":"e6c7d6d5-31ef-4aae-82b1-0d1130b29243","resourceVersion":"344","creationTimestamp":"2024-06-04T23:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_04T23_15_12_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-04T23:15:07Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0604 23:15:34.178131    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000
	I0604 23:15:34.178131    6196 round_trippers.go:469] Request Headers:
	I0604 23:15:34.178131    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:15:34.178131    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:15:34.182097    6196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0604 23:15:34.182097    6196 round_trippers.go:577] Response Headers:
	I0604 23:15:34.182097    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:15:34.182097    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:15:34.182097    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:15:34.182097    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:15:34.182097    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:15:34 GMT
	I0604 23:15:34.182097    6196 round_trippers.go:580]     Audit-Id: ab0268bc-25bc-4178-afd1-198402cb645c
	I0604 23:15:34.182801    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000","uid":"e6c7d6d5-31ef-4aae-82b1-0d1130b29243","resourceVersion":"344","creationTimestamp":"2024-06-04T23:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_04T23_15_12_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-04T23:15:07Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0604 23:15:34.672069    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000
	I0604 23:15:34.672069    6196 round_trippers.go:469] Request Headers:
	I0604 23:15:34.672235    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:15:34.672235    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:15:34.674269    6196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0604 23:15:34.674269    6196 round_trippers.go:577] Response Headers:
	I0604 23:15:34.677711    6196 round_trippers.go:580]     Audit-Id: f9c14f6b-bc24-42b6-b797-75b7f8fdca76
	I0604 23:15:34.677711    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:15:34.677711    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:15:34.677711    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:15:34.677711    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:15:34.677711    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:15:34 GMT
	I0604 23:15:34.678581    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000","uid":"e6c7d6d5-31ef-4aae-82b1-0d1130b29243","resourceVersion":"344","creationTimestamp":"2024-06-04T23:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_04T23_15_12_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-04T23:15:07Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0604 23:15:34.679246    6196 node_ready.go:53] node "multinode-022000" has status "Ready":"False"
	I0604 23:15:35.110799    6196 main.go:141] libmachine: [stdout =====>] : 172.20.128.97
	
	I0604 23:15:35.110973    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:15:35.111264    6196 sshutil.go:53] new ssh client: &{IP:172.20.128.97 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-022000\id_rsa Username:docker}
	I0604 23:15:35.168107    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000
	I0604 23:15:35.168408    6196 round_trippers.go:469] Request Headers:
	I0604 23:15:35.168408    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:15:35.168408    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:15:35.174833    6196 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0604 23:15:35.174833    6196 round_trippers.go:577] Response Headers:
	I0604 23:15:35.174833    6196 round_trippers.go:580]     Audit-Id: a3a6b8e6-29ee-4764-b046-70d5825c34c6
	I0604 23:15:35.174833    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:15:35.174833    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:15:35.174833    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:15:35.174833    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:15:35.174833    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:15:35 GMT
	I0604 23:15:35.175488    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000","uid":"e6c7d6d5-31ef-4aae-82b1-0d1130b29243","resourceVersion":"344","creationTimestamp":"2024-06-04T23:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_04T23_15_12_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-04T23:15:07Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0604 23:15:35.251403    6196 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0604 23:15:35.427697    6196 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0604 23:15:35.428838    6196 round_trippers.go:463] GET https://172.20.128.97:8443/apis/storage.k8s.io/v1/storageclasses
	I0604 23:15:35.428838    6196 round_trippers.go:469] Request Headers:
	I0604 23:15:35.428838    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:15:35.428838    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:15:35.451007    6196 round_trippers.go:574] Response Status: 200 OK in 21 milliseconds
	I0604 23:15:35.451007    6196 round_trippers.go:577] Response Headers:
	I0604 23:15:35.451007    6196 round_trippers.go:580]     Content-Length: 1273
	I0604 23:15:35.451007    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:15:35 GMT
	I0604 23:15:35.451007    6196 round_trippers.go:580]     Audit-Id: b85435c9-6871-4609-a178-07c9a4667f2f
	I0604 23:15:35.451007    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:15:35.451007    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:15:35.451007    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:15:35.451007    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:15:35.451165    6196 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"400"},"items":[{"metadata":{"name":"standard","uid":"ab40f00e-25e9-4df4-9ba0-6035df5c3f6e","resourceVersion":"400","creationTimestamp":"2024-06-04T23:15:35Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-06-04T23:15:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0604 23:15:35.451881    6196 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"ab40f00e-25e9-4df4-9ba0-6035df5c3f6e","resourceVersion":"400","creationTimestamp":"2024-06-04T23:15:35Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-06-04T23:15:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0604 23:15:35.451969    6196 round_trippers.go:463] PUT https://172.20.128.97:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0604 23:15:35.451969    6196 round_trippers.go:469] Request Headers:
	I0604 23:15:35.451969    6196 round_trippers.go:473]     Content-Type: application/json
	I0604 23:15:35.451969    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:15:35.452050    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:15:35.460575    6196 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0604 23:15:35.460575    6196 round_trippers.go:577] Response Headers:
	I0604 23:15:35.460575    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:15:35.460575    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:15:35.460575    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:15:35.460575    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:15:35.460575    6196 round_trippers.go:580]     Content-Length: 1220
	I0604 23:15:35.460575    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:15:35 GMT
	I0604 23:15:35.460575    6196 round_trippers.go:580]     Audit-Id: 4cfaecfc-2b1f-4192-b517-8c51ed6423a9
	I0604 23:15:35.460575    6196 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"ab40f00e-25e9-4df4-9ba0-6035df5c3f6e","resourceVersion":"400","creationTimestamp":"2024-06-04T23:15:35Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-06-04T23:15:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0604 23:15:35.464767    6196 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0604 23:15:35.470476    6196 addons.go:510] duration metric: took 10.6979529s for enable addons: enabled=[storage-provisioner default-storageclass]
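
A roughly equivalent manual check of the addon state recorded above (assuming the kubeconfig context carries the profile name, multinode-022000) would be:

    kubectl --context multinode-022000 get storageclass standard -o yaml

which should show provisioner k8s.io/minikube-hostpath and the storageclass.kubernetes.io/is-default-class: "true" annotation, matching the PUT response body logged above.
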
	I0604 23:15:35.677616    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000
	I0604 23:15:35.677616    6196 round_trippers.go:469] Request Headers:
	I0604 23:15:35.678011    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:15:35.678011    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:15:35.678460    6196 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0604 23:15:35.678460    6196 round_trippers.go:577] Response Headers:
	I0604 23:15:35.678460    6196 round_trippers.go:580]     Audit-Id: e12f0207-4c05-4977-a5ec-4cd68a322b4a
	I0604 23:15:35.682685    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:15:35.682685    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:15:35.682685    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:15:35.682685    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:15:35.682685    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:15:35 GMT
	I0604 23:15:35.682971    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000","uid":"e6c7d6d5-31ef-4aae-82b1-0d1130b29243","resourceVersion":"344","creationTimestamp":"2024-06-04T23:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_04T23_15_12_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-04T23:15:07Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0604 23:15:36.173350    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000
	I0604 23:15:36.173635    6196 round_trippers.go:469] Request Headers:
	I0604 23:15:36.173694    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:15:36.173694    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:15:36.173918    6196 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0604 23:15:36.173918    6196 round_trippers.go:577] Response Headers:
	I0604 23:15:36.173918    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:15:36.173918    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:15:36.173918    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:15:36.173918    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:15:36 GMT
	I0604 23:15:36.173918    6196 round_trippers.go:580]     Audit-Id: 447db061-39be-49da-be1f-ce2fc44cfa8f
	I0604 23:15:36.173918    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:15:36.179040    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000","uid":"e6c7d6d5-31ef-4aae-82b1-0d1130b29243","resourceVersion":"344","creationTimestamp":"2024-06-04T23:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_04T23_15_12_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-04T23:15:07Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0604 23:15:36.667591    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000
	I0604 23:15:36.667871    6196 round_trippers.go:469] Request Headers:
	I0604 23:15:36.667871    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:15:36.667871    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:15:36.668652    6196 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0604 23:15:36.671337    6196 round_trippers.go:577] Response Headers:
	I0604 23:15:36.671337    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:15:36 GMT
	I0604 23:15:36.671337    6196 round_trippers.go:580]     Audit-Id: 0e8644cf-99bd-41fe-8558-fb4d4a616ba9
	I0604 23:15:36.671337    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:15:36.671402    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:15:36.671402    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:15:36.671402    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:15:36.671402    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000","uid":"e6c7d6d5-31ef-4aae-82b1-0d1130b29243","resourceVersion":"344","creationTimestamp":"2024-06-04T23:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_04T23_15_12_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-04T23:15:07Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0604 23:15:37.169282    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000
	I0604 23:15:37.169510    6196 round_trippers.go:469] Request Headers:
	I0604 23:15:37.169618    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:15:37.169618    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:15:37.175953    6196 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0604 23:15:37.176548    6196 round_trippers.go:577] Response Headers:
	I0604 23:15:37.176548    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:15:37.176548    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:15:37.176548    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:15:37.176548    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:15:37 GMT
	I0604 23:15:37.176548    6196 round_trippers.go:580]     Audit-Id: 60ea7881-6643-4f93-b578-92563307f922
	I0604 23:15:37.176548    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:15:37.176803    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000","uid":"e6c7d6d5-31ef-4aae-82b1-0d1130b29243","resourceVersion":"344","creationTimestamp":"2024-06-04T23:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_04T23_15_12_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-04T23:15:07Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0604 23:15:37.176803    6196 node_ready.go:53] node "multinode-022000" has status "Ready":"False"
	I0604 23:15:37.667649    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000
	I0604 23:15:37.667902    6196 round_trippers.go:469] Request Headers:
	I0604 23:15:37.667902    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:15:37.667902    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:15:37.668745    6196 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0604 23:15:37.672699    6196 round_trippers.go:577] Response Headers:
	I0604 23:15:37.672699    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:15:37 GMT
	I0604 23:15:37.672699    6196 round_trippers.go:580]     Audit-Id: 00970b83-42c6-462c-8e52-8e2fe2f11f93
	I0604 23:15:37.672818    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:15:37.672818    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:15:37.672818    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:15:37.672818    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:15:37.673116    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000","uid":"e6c7d6d5-31ef-4aae-82b1-0d1130b29243","resourceVersion":"344","creationTimestamp":"2024-06-04T23:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_04T23_15_12_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-04T23:15:07Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0604 23:15:38.170362    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000
	I0604 23:15:38.170614    6196 round_trippers.go:469] Request Headers:
	I0604 23:15:38.170614    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:15:38.170614    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:15:38.171327    6196 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0604 23:15:38.171327    6196 round_trippers.go:577] Response Headers:
	I0604 23:15:38.171327    6196 round_trippers.go:580]     Audit-Id: 63ac2993-0282-4679-aa80-4fbe9394f44b
	I0604 23:15:38.171327    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:15:38.171327    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:15:38.171327    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:15:38.171327    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:15:38.171327    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:15:38 GMT
	I0604 23:15:38.174749    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000","uid":"e6c7d6d5-31ef-4aae-82b1-0d1130b29243","resourceVersion":"344","creationTimestamp":"2024-06-04T23:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_04T23_15_12_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-04T23:15:07Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0604 23:15:38.679517    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000
	I0604 23:15:38.679517    6196 round_trippers.go:469] Request Headers:
	I0604 23:15:38.679517    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:15:38.679517    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:15:38.682341    6196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0604 23:15:38.688997    6196 round_trippers.go:577] Response Headers:
	I0604 23:15:38.688997    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:15:38.688997    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:15:38.688997    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:15:38.688997    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:15:38 GMT
	I0604 23:15:38.688997    6196 round_trippers.go:580]     Audit-Id: 896f0e2a-fc8f-4543-8d64-efdff3824406
	I0604 23:15:38.688997    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:15:38.692052    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000","uid":"e6c7d6d5-31ef-4aae-82b1-0d1130b29243","resourceVersion":"404","creationTimestamp":"2024-06-04T23:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_04T23_15_12_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-04T23:15:07Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0604 23:15:38.693356    6196 node_ready.go:49] node "multinode-022000" has status "Ready":"True"
	I0604 23:15:38.694027    6196 node_ready.go:38] duration metric: took 13.028057s for node "multinode-022000" to be "Ready" ...
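
The node-readiness polling loop above (repeated GETs of /api/v1/nodes/multinode-022000 until the Ready condition flips to True) can be reproduced by hand with a single command; a hedged equivalent, again assuming the context name matches the profile, is:

    kubectl --context multinode-022000 wait node/multinode-022000 --for=condition=Ready --timeout=6m0s
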
	I0604 23:15:38.694027    6196 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0604 23:15:38.694027    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/namespaces/kube-system/pods
	I0604 23:15:38.694027    6196 round_trippers.go:469] Request Headers:
	I0604 23:15:38.694027    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:15:38.694027    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:15:38.706648    6196 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0604 23:15:38.706721    6196 round_trippers.go:577] Response Headers:
	I0604 23:15:38.706721    6196 round_trippers.go:580]     Audit-Id: 27217599-9af4-4c38-9e37-414a11907a0a
	I0604 23:15:38.706721    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:15:38.706721    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:15:38.706721    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:15:38.706721    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:15:38.706721    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:15:38 GMT
	I0604 23:15:38.707615    6196 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"407"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-mlh9s","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"15497b54-7964-47a8-9dc8-89c225f6b842","resourceVersion":"405","creationTimestamp":"2024-06-04T23:15:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"35e6f047-84cd-4ebd-aa42-f4810a209d30","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-04T23:15:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"35e6f047-84cd-4ebd-aa42-f4810a209d30\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 52787 chars]
	I0604 23:15:38.711929    6196 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-mlh9s" in "kube-system" namespace to be "Ready" ...
	I0604 23:15:38.712520    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mlh9s
	I0604 23:15:38.712520    6196 round_trippers.go:469] Request Headers:
	I0604 23:15:38.712520    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:15:38.712624    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:15:38.720639    6196 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0604 23:15:38.721037    6196 round_trippers.go:577] Response Headers:
	I0604 23:15:38.721037    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:15:38.721037    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:15:38.721037    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:15:38.721037    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:15:38 GMT
	I0604 23:15:38.721037    6196 round_trippers.go:580]     Audit-Id: 16530089-405c-4389-8b97-02b10335a3a3
	I0604 23:15:38.721037    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:15:38.721037    6196 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-mlh9s","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"15497b54-7964-47a8-9dc8-89c225f6b842","resourceVersion":"408","creationTimestamp":"2024-06-04T23:15:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"35e6f047-84cd-4ebd-aa42-f4810a209d30","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-04T23:15:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"35e6f047-84cd-4ebd-aa42-f4810a209d30\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6447 chars]
	I0604 23:15:38.722388    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000
	I0604 23:15:38.722438    6196 round_trippers.go:469] Request Headers:
	I0604 23:15:38.722487    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:15:38.722558    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:15:38.733579    6196 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0604 23:15:38.733579    6196 round_trippers.go:577] Response Headers:
	I0604 23:15:38.733793    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:15:38.733793    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:15:38.733793    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:15:38 GMT
	I0604 23:15:38.733793    6196 round_trippers.go:580]     Audit-Id: 98c01cbe-1d14-4b15-ae27-6dacd6a71484
	I0604 23:15:38.733793    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:15:38.733793    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:15:38.734153    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000","uid":"e6c7d6d5-31ef-4aae-82b1-0d1130b29243","resourceVersion":"404","creationTimestamp":"2024-06-04T23:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_04T23_15_12_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-04T23:15:07Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0604 23:15:39.220868    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mlh9s
	I0604 23:15:39.220868    6196 round_trippers.go:469] Request Headers:
	I0604 23:15:39.220868    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:15:39.220868    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:15:39.221457    6196 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0604 23:15:39.225296    6196 round_trippers.go:577] Response Headers:
	I0604 23:15:39.225296    6196 round_trippers.go:580]     Audit-Id: d04e6ac8-76ea-4530-8ba4-fb317ffd1f9e
	I0604 23:15:39.225296    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:15:39.225296    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:15:39.225296    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:15:39.225296    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:15:39.225296    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:15:39 GMT
	I0604 23:15:39.225535    6196 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-mlh9s","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"15497b54-7964-47a8-9dc8-89c225f6b842","resourceVersion":"408","creationTimestamp":"2024-06-04T23:15:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"35e6f047-84cd-4ebd-aa42-f4810a209d30","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-04T23:15:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"35e6f047-84cd-4ebd-aa42-f4810a209d30\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6447 chars]
	I0604 23:15:39.227043    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000
	I0604 23:15:39.227178    6196 round_trippers.go:469] Request Headers:
	I0604 23:15:39.227178    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:15:39.227178    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:15:39.229494    6196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0604 23:15:39.229494    6196 round_trippers.go:577] Response Headers:
	I0604 23:15:39.229494    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:15:39.229494    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:15:39.229494    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:15:39.229494    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:15:39 GMT
	I0604 23:15:39.229999    6196 round_trippers.go:580]     Audit-Id: 172b72b1-bd20-4aa6-990e-0d2d3f550654
	I0604 23:15:39.229999    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:15:39.230089    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000","uid":"e6c7d6d5-31ef-4aae-82b1-0d1130b29243","resourceVersion":"404","creationTimestamp":"2024-06-04T23:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_04T23_15_12_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-04T23:15:07Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0604 23:15:39.713807    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mlh9s
	I0604 23:15:39.713881    6196 round_trippers.go:469] Request Headers:
	I0604 23:15:39.713881    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:15:39.713913    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:15:39.714735    6196 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0604 23:15:39.714735    6196 round_trippers.go:577] Response Headers:
	I0604 23:15:39.714735    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:15:39.714735    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:15:39.714735    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:15:39 GMT
	I0604 23:15:39.714735    6196 round_trippers.go:580]     Audit-Id: 7989a4d1-8eef-4294-befc-640c8f4179da
	I0604 23:15:39.714735    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:15:39.714735    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:15:39.714735    6196 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-mlh9s","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"15497b54-7964-47a8-9dc8-89c225f6b842","resourceVersion":"408","creationTimestamp":"2024-06-04T23:15:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"35e6f047-84cd-4ebd-aa42-f4810a209d30","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-04T23:15:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"35e6f047-84cd-4ebd-aa42-f4810a209d30\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6447 chars]
	I0604 23:15:39.718640    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000
	I0604 23:15:39.718770    6196 round_trippers.go:469] Request Headers:
	I0604 23:15:39.718770    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:15:39.718770    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:15:39.718966    6196 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0604 23:15:39.718966    6196 round_trippers.go:577] Response Headers:
	I0604 23:15:39.718966    6196 round_trippers.go:580]     Audit-Id: 4a8e6722-b199-4de9-99d0-c8bbbff9564a
	I0604 23:15:39.718966    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:15:39.718966    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:15:39.718966    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:15:39.718966    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:15:39.718966    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:15:39 GMT
	I0604 23:15:39.721515    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000","uid":"e6c7d6d5-31ef-4aae-82b1-0d1130b29243","resourceVersion":"404","creationTimestamp":"2024-06-04T23:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_04T23_15_12_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-04T23:15:07Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0604 23:15:40.219855    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mlh9s
	I0604 23:15:40.219971    6196 round_trippers.go:469] Request Headers:
	I0604 23:15:40.219971    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:15:40.219971    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:15:40.227769    6196 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0604 23:15:40.227769    6196 round_trippers.go:577] Response Headers:
	I0604 23:15:40.227769    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:15:40.227769    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:15:40.227769    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:15:40.227769    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:15:40 GMT
	I0604 23:15:40.227769    6196 round_trippers.go:580]     Audit-Id: 5cad6698-d572-4c20-a07b-3ba9567951a1
	I0604 23:15:40.227769    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:15:40.227769    6196 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-mlh9s","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"15497b54-7964-47a8-9dc8-89c225f6b842","resourceVersion":"408","creationTimestamp":"2024-06-04T23:15:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"35e6f047-84cd-4ebd-aa42-f4810a209d30","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-04T23:15:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"35e6f047-84cd-4ebd-aa42-f4810a209d30\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6447 chars]
	I0604 23:15:40.228646    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000
	I0604 23:15:40.228776    6196 round_trippers.go:469] Request Headers:
	I0604 23:15:40.228776    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:15:40.228776    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:15:40.231480    6196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0604 23:15:40.231480    6196 round_trippers.go:577] Response Headers:
	I0604 23:15:40.231480    6196 round_trippers.go:580]     Audit-Id: 89624d5e-56f4-4ddc-b039-b89e88d82e48
	I0604 23:15:40.231480    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:15:40.231480    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:15:40.231480    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:15:40.231480    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:15:40.231480    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:15:40 GMT
	I0604 23:15:40.239198    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000","uid":"e6c7d6d5-31ef-4aae-82b1-0d1130b29243","resourceVersion":"404","creationTimestamp":"2024-06-04T23:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_04T23_15_12_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-04T23:15:07Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0604 23:15:40.729420    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mlh9s
	I0604 23:15:40.729420    6196 round_trippers.go:469] Request Headers:
	I0604 23:15:40.729420    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:15:40.729420    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:15:40.731184    6196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0604 23:15:40.731184    6196 round_trippers.go:577] Response Headers:
	I0604 23:15:40.731184    6196 round_trippers.go:580]     Audit-Id: 00a096e0-9a0e-4cfa-9931-e153adc6dbb2
	I0604 23:15:40.736356    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:15:40.736356    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:15:40.736356    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:15:40.736356    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:15:40.736356    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:15:40 GMT
	I0604 23:15:40.736600    6196 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-mlh9s","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"15497b54-7964-47a8-9dc8-89c225f6b842","resourceVersion":"408","creationTimestamp":"2024-06-04T23:15:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"35e6f047-84cd-4ebd-aa42-f4810a209d30","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-04T23:15:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"35e6f047-84cd-4ebd-aa42-f4810a209d30\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6447 chars]
	I0604 23:15:40.737408    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000
	I0604 23:15:40.737408    6196 round_trippers.go:469] Request Headers:
	I0604 23:15:40.737408    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:15:40.737408    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:15:40.741127    6196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0604 23:15:40.745931    6196 round_trippers.go:577] Response Headers:
	I0604 23:15:40.745931    6196 round_trippers.go:580]     Audit-Id: 047e5035-9de6-46ec-9816-2ae223985e89
	I0604 23:15:40.745931    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:15:40.746038    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:15:40.746038    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:15:40.746038    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:15:40.746038    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:15:40 GMT
	I0604 23:15:40.746284    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000","uid":"e6c7d6d5-31ef-4aae-82b1-0d1130b29243","resourceVersion":"404","creationTimestamp":"2024-06-04T23:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_04T23_15_12_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-04T23:15:07Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0604 23:15:40.746462    6196 pod_ready.go:102] pod "coredns-7db6d8ff4d-mlh9s" in "kube-system" namespace has status "Ready":"False"
	I0604 23:15:41.225204    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mlh9s
	I0604 23:15:41.225204    6196 round_trippers.go:469] Request Headers:
	I0604 23:15:41.225204    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:15:41.225280    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:15:41.225519    6196 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0604 23:15:41.229758    6196 round_trippers.go:577] Response Headers:
	I0604 23:15:41.229758    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:15:41.229758    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:15:41.229758    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:15:41.229758    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:15:41 GMT
	I0604 23:15:41.229758    6196 round_trippers.go:580]     Audit-Id: 83c3e1d0-f6e3-4ee7-83fb-87a0cf8db857
	I0604 23:15:41.229758    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:15:41.229758    6196 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-mlh9s","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"15497b54-7964-47a8-9dc8-89c225f6b842","resourceVersion":"421","creationTimestamp":"2024-06-04T23:15:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"35e6f047-84cd-4ebd-aa42-f4810a209d30","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-04T23:15:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"35e6f047-84cd-4ebd-aa42-f4810a209d30\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6578 chars]
	I0604 23:15:41.230829    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000
	I0604 23:15:41.230829    6196 round_trippers.go:469] Request Headers:
	I0604 23:15:41.230905    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:15:41.230905    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:15:41.231083    6196 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0604 23:15:41.231083    6196 round_trippers.go:577] Response Headers:
	I0604 23:15:41.231083    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:15:41 GMT
	I0604 23:15:41.231083    6196 round_trippers.go:580]     Audit-Id: 6fb7d752-3294-447d-8c70-a37653a7a3a3
	I0604 23:15:41.231083    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:15:41.231083    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:15:41.231083    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:15:41.231083    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:15:41.234025    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000","uid":"e6c7d6d5-31ef-4aae-82b1-0d1130b29243","resourceVersion":"404","creationTimestamp":"2024-06-04T23:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_04T23_15_12_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-04T23:15:07Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0604 23:15:41.234149    6196 pod_ready.go:92] pod "coredns-7db6d8ff4d-mlh9s" in "kube-system" namespace has status "Ready":"True"
	I0604 23:15:41.234149    6196 pod_ready.go:81] duration metric: took 2.5216677s for pod "coredns-7db6d8ff4d-mlh9s" in "kube-system" namespace to be "Ready" ...
	I0604 23:15:41.234149    6196 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-022000" in "kube-system" namespace to be "Ready" ...
	I0604 23:15:41.234690    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-022000
	I0604 23:15:41.234690    6196 round_trippers.go:469] Request Headers:
	I0604 23:15:41.234690    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:15:41.234690    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:15:41.234958    6196 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0604 23:15:41.234958    6196 round_trippers.go:577] Response Headers:
	I0604 23:15:41.234958    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:15:41.234958    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:15:41.234958    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:15:41.234958    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:15:41 GMT
	I0604 23:15:41.234958    6196 round_trippers.go:580]     Audit-Id: 477a939d-f5b8-481d-afe9-605ae0f3ce81
	I0604 23:15:41.234958    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:15:41.238207    6196 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-022000","namespace":"kube-system","uid":"cf5ce7db-ab12-4be8-9e44-317caab1adeb","resourceVersion":"386","creationTimestamp":"2024-06-04T23:15:11Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.20.128.97:2379","kubernetes.io/config.hash":"062055fff54be1dfa52344fae14a29a3","kubernetes.io/config.mirror":"062055fff54be1dfa52344fae14a29a3","kubernetes.io/config.seen":"2024-06-04T23:15:11.311330236Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-022000","uid":"e6c7d6d5-31ef-4aae-82b1-0d1130b29243","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-04T23:15:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6159 chars]
	I0604 23:15:41.239163    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000
	I0604 23:15:41.239163    6196 round_trippers.go:469] Request Headers:
	I0604 23:15:41.239163    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:15:41.239163    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:15:41.239966    6196 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0604 23:15:41.239966    6196 round_trippers.go:577] Response Headers:
	I0604 23:15:41.239966    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:15:41.242330    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:15:41 GMT
	I0604 23:15:41.242330    6196 round_trippers.go:580]     Audit-Id: be82e54d-8305-49e7-9403-e76c8df0e4eb
	I0604 23:15:41.242330    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:15:41.242330    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:15:41.242330    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:15:41.242330    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000","uid":"e6c7d6d5-31ef-4aae-82b1-0d1130b29243","resourceVersion":"404","creationTimestamp":"2024-06-04T23:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_04T23_15_12_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-04T23:15:07Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0604 23:15:41.242330    6196 pod_ready.go:92] pod "etcd-multinode-022000" in "kube-system" namespace has status "Ready":"True"
	I0604 23:15:41.242330    6196 pod_ready.go:81] duration metric: took 8.1808ms for pod "etcd-multinode-022000" in "kube-system" namespace to be "Ready" ...
	I0604 23:15:41.242330    6196 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-022000" in "kube-system" namespace to be "Ready" ...
	I0604 23:15:41.243068    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-022000
	I0604 23:15:41.243127    6196 round_trippers.go:469] Request Headers:
	I0604 23:15:41.243170    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:15:41.243192    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:15:41.244515    6196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0604 23:15:41.244515    6196 round_trippers.go:577] Response Headers:
	I0604 23:15:41.244515    6196 round_trippers.go:580]     Audit-Id: 3ec69af6-eb51-4b16-b87c-315e3f3911cd
	I0604 23:15:41.244515    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:15:41.246298    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:15:41.246298    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:15:41.246298    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:15:41.246298    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:15:41 GMT
	I0604 23:15:41.246565    6196 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-022000","namespace":"kube-system","uid":"a15ca283-cf36-4ce5-846a-37257524e217","resourceVersion":"385","creationTimestamp":"2024-06-04T23:15:10Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.20.128.97:8443","kubernetes.io/config.hash":"9ba2e7a4236a9c9b06cf265710457805","kubernetes.io/config.mirror":"9ba2e7a4236a9c9b06cf265710457805","kubernetes.io/config.seen":"2024-06-04T23:15:02.371587958Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-022000","uid":"e6c7d6d5-31ef-4aae-82b1-0d1130b29243","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-04T23:15:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7694 chars]
	I0604 23:15:41.247272    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000
	I0604 23:15:41.247301    6196 round_trippers.go:469] Request Headers:
	I0604 23:15:41.247353    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:15:41.247353    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:15:41.251275    6196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0604 23:15:41.251275    6196 round_trippers.go:577] Response Headers:
	I0604 23:15:41.251275    6196 round_trippers.go:580]     Audit-Id: 5d57f454-e8f6-48e8-a938-68f5f87173c6
	I0604 23:15:41.251275    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:15:41.251275    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:15:41.251275    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:15:41.251275    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:15:41.251275    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:15:41 GMT
	I0604 23:15:41.251275    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000","uid":"e6c7d6d5-31ef-4aae-82b1-0d1130b29243","resourceVersion":"404","creationTimestamp":"2024-06-04T23:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_04T23_15_12_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-04T23:15:07Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0604 23:15:41.251917    6196 pod_ready.go:92] pod "kube-apiserver-multinode-022000" in "kube-system" namespace has status "Ready":"True"
	I0604 23:15:41.251917    6196 pod_ready.go:81] duration metric: took 9.5877ms for pod "kube-apiserver-multinode-022000" in "kube-system" namespace to be "Ready" ...
	I0604 23:15:41.251917    6196 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-022000" in "kube-system" namespace to be "Ready" ...
	I0604 23:15:41.251917    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-022000
	I0604 23:15:41.251917    6196 round_trippers.go:469] Request Headers:
	I0604 23:15:41.251917    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:15:41.251917    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:15:41.257211    6196 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0604 23:15:41.257211    6196 round_trippers.go:577] Response Headers:
	I0604 23:15:41.257211    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:15:41.257211    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:15:41.257211    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:15:41.257211    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:15:41.257211    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:15:41 GMT
	I0604 23:15:41.257211    6196 round_trippers.go:580]     Audit-Id: be914605-3423-4f1c-8bb8-42e72021db83
	I0604 23:15:41.257211    6196 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-022000","namespace":"kube-system","uid":"2bb46405-19fa-4ca8-afd5-6d6224271444","resourceVersion":"382","creationTimestamp":"2024-06-04T23:15:11Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"84c4d645ecc2a919c3d46a8ee859a4e7","kubernetes.io/config.mirror":"84c4d645ecc2a919c3d46a8ee859a4e7","kubernetes.io/config.seen":"2024-06-04T23:15:11.311327436Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-022000","uid":"e6c7d6d5-31ef-4aae-82b1-0d1130b29243","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-04T23:15:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7264 chars]
	I0604 23:15:41.257944    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000
	I0604 23:15:41.257944    6196 round_trippers.go:469] Request Headers:
	I0604 23:15:41.257944    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:15:41.257944    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:15:41.259220    6196 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0604 23:15:41.259220    6196 round_trippers.go:577] Response Headers:
	I0604 23:15:41.259220    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:15:41.259220    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:15:41 GMT
	I0604 23:15:41.259220    6196 round_trippers.go:580]     Audit-Id: 7cf6a7e2-64cd-4df7-a067-ceac68abb607
	I0604 23:15:41.259220    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:15:41.259220    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:15:41.259220    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:15:41.259220    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000","uid":"e6c7d6d5-31ef-4aae-82b1-0d1130b29243","resourceVersion":"404","creationTimestamp":"2024-06-04T23:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_04T23_15_12_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-04T23:15:07Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0604 23:15:41.261197    6196 pod_ready.go:92] pod "kube-controller-manager-multinode-022000" in "kube-system" namespace has status "Ready":"True"
	I0604 23:15:41.261235    6196 pod_ready.go:81] duration metric: took 9.3174ms for pod "kube-controller-manager-multinode-022000" in "kube-system" namespace to be "Ready" ...
	I0604 23:15:41.261277    6196 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pbmpr" in "kube-system" namespace to be "Ready" ...
	I0604 23:15:41.261380    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pbmpr
	I0604 23:15:41.261418    6196 round_trippers.go:469] Request Headers:
	I0604 23:15:41.261418    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:15:41.261462    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:15:41.263854    6196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0604 23:15:41.263854    6196 round_trippers.go:577] Response Headers:
	I0604 23:15:41.263854    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:15:41.263854    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:15:41.263854    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:15:41.263854    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:15:41 GMT
	I0604 23:15:41.263854    6196 round_trippers.go:580]     Audit-Id: b958e67e-21f2-47f5-8372-987853ff9a10
	I0604 23:15:41.263854    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:15:41.263854    6196 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-pbmpr","generateName":"kube-proxy-","namespace":"kube-system","uid":"ab42abeb-7ba9-4571-8c49-7c7f1e4bb6be","resourceVersion":"378","creationTimestamp":"2024-06-04T23:15:24Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"65a2d176-a8e8-492d-972b-d687ffc57c3d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-04T23:15:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65a2d176-a8e8-492d-972b-d687ffc57c3d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5828 chars]
	I0604 23:15:41.265404    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000
	I0604 23:15:41.265614    6196 round_trippers.go:469] Request Headers:
	I0604 23:15:41.265614    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:15:41.265614    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:15:41.265923    6196 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0604 23:15:41.265923    6196 round_trippers.go:577] Response Headers:
	I0604 23:15:41.265923    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:15:41.265923    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:15:41.268657    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:15:41.268657    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:15:41 GMT
	I0604 23:15:41.268657    6196 round_trippers.go:580]     Audit-Id: e0f5f02b-1813-482b-9efc-d9f8df0e9e26
	I0604 23:15:41.268657    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:15:41.269068    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000","uid":"e6c7d6d5-31ef-4aae-82b1-0d1130b29243","resourceVersion":"404","creationTimestamp":"2024-06-04T23:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_04T23_15_12_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-04T23:15:07Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0604 23:15:41.269358    6196 pod_ready.go:92] pod "kube-proxy-pbmpr" in "kube-system" namespace has status "Ready":"True"
	I0604 23:15:41.269358    6196 pod_ready.go:81] duration metric: took 8.0807ms for pod "kube-proxy-pbmpr" in "kube-system" namespace to be "Ready" ...
	I0604 23:15:41.269358    6196 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-022000" in "kube-system" namespace to be "Ready" ...
	I0604 23:15:41.430467    6196 request.go:629] Waited for 160.6738ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.128.97:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-022000
	I0604 23:15:41.430693    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-022000
	I0604 23:15:41.430693    6196 round_trippers.go:469] Request Headers:
	I0604 23:15:41.430769    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:15:41.430769    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:15:41.431075    6196 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0604 23:15:41.431075    6196 round_trippers.go:577] Response Headers:
	I0604 23:15:41.431075    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:15:41 GMT
	I0604 23:15:41.431075    6196 round_trippers.go:580]     Audit-Id: f650af1a-85cb-41b1-be2c-28c816fc42c9
	I0604 23:15:41.434966    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:15:41.434966    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:15:41.434966    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:15:41.434966    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:15:41.435345    6196 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-022000","namespace":"kube-system","uid":"0453fac4-fec2-4a1f-80f7-c3192dae4ea5","resourceVersion":"384","creationTimestamp":"2024-06-04T23:15:11Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7091343253039c34aad74dccf8d697b0","kubernetes.io/config.mirror":"7091343253039c34aad74dccf8d697b0","kubernetes.io/config.seen":"2024-06-04T23:15:11.311328836Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-022000","uid":"e6c7d6d5-31ef-4aae-82b1-0d1130b29243","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-04T23:15:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4994 chars]
	I0604 23:15:41.636019    6196 request.go:629] Waited for 199.2011ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.128.97:8443/api/v1/nodes/multinode-022000
	I0604 23:15:41.636019    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000
	I0604 23:15:41.636019    6196 round_trippers.go:469] Request Headers:
	I0604 23:15:41.636019    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:15:41.636019    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:15:41.636583    6196 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0604 23:15:41.636583    6196 round_trippers.go:577] Response Headers:
	I0604 23:15:41.636583    6196 round_trippers.go:580]     Audit-Id: 97a410fb-14da-4fd3-8b67-06b4c82c6da9
	I0604 23:15:41.636583    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:15:41.636583    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:15:41.636583    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:15:41.640353    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:15:41.640353    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:15:41 GMT
	I0604 23:15:41.640500    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000","uid":"e6c7d6d5-31ef-4aae-82b1-0d1130b29243","resourceVersion":"404","creationTimestamp":"2024-06-04T23:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_04T23_15_12_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-04T23:15:07Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0604 23:15:41.640949    6196 pod_ready.go:92] pod "kube-scheduler-multinode-022000" in "kube-system" namespace has status "Ready":"True"
	I0604 23:15:41.640949    6196 pod_ready.go:81] duration metric: took 371.5883ms for pod "kube-scheduler-multinode-022000" in "kube-system" namespace to be "Ready" ...
	I0604 23:15:41.640949    6196 pod_ready.go:38] duration metric: took 2.9468996s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
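	(The readiness waits above are driven by repeated GETs of each pod object; a pod counts as "Ready" once its status carries a condition of type Ready with status "True". A minimal, stdlib-only Go sketch of that decision, assuming the raw JSON bodies logged above — this is an illustration, not minikube's pod_ready.go itself:)

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// Only the fields needed for the readiness decision.
	type pod struct {
		Status struct {
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	}

	func isPodReady(raw []byte) (bool, error) {
		var p pod
		if err := json.Unmarshal(raw, &p); err != nil {
			return false, err
		}
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" {
				return c.Status == "True", nil
			}
		}
		return false, nil
	}

	func main() {
		// Trimmed shape of the /api/v1/namespaces/kube-system/pods/<name> bodies logged above.
		raw := []byte(`{"status":{"conditions":[{"type":"Ready","status":"True"}]}}`)
		ready, err := isPodReady(raw)
		fmt.Println(ready, err)
	}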
	I0604 23:15:41.641107    6196 api_server.go:52] waiting for apiserver process to appear ...
	I0604 23:15:41.654079    6196 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0604 23:15:41.682999    6196 command_runner.go:130] > 2011
	I0604 23:15:41.682999    6196 api_server.go:72] duration metric: took 16.910428s to wait for apiserver process to appear ...
	I0604 23:15:41.682999    6196 api_server.go:88] waiting for apiserver healthz status ...
	I0604 23:15:41.683099    6196 api_server.go:253] Checking apiserver healthz at https://172.20.128.97:8443/healthz ...
	I0604 23:15:41.689749    6196 api_server.go:279] https://172.20.128.97:8443/healthz returned 200:
	ok
	I0604 23:15:41.690816    6196 round_trippers.go:463] GET https://172.20.128.97:8443/version
	I0604 23:15:41.690816    6196 round_trippers.go:469] Request Headers:
	I0604 23:15:41.690816    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:15:41.690816    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:15:41.693807    6196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0604 23:15:41.693807    6196 round_trippers.go:577] Response Headers:
	I0604 23:15:41.693807    6196 round_trippers.go:580]     Content-Length: 263
	I0604 23:15:41.693807    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:15:41 GMT
	I0604 23:15:41.693807    6196 round_trippers.go:580]     Audit-Id: c4f65590-ad40-49eb-9d5c-d075d8a9623e
	I0604 23:15:41.693807    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:15:41.693807    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:15:41.693807    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:15:41.693807    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:15:41.693807    6196 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.1",
	  "gitCommit": "6911225c3f747e1cd9d109c305436d08b668f086",
	  "gitTreeState": "clean",
	  "buildDate": "2024-05-14T10:42:02Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0604 23:15:41.693807    6196 api_server.go:141] control plane version: v1.30.1
	I0604 23:15:41.694338    6196 api_server.go:131] duration metric: took 11.3391ms to wait for apiserver health ...
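	(Once the pods are Ready, the log probes the apiserver's /healthz until it returns 200 with body "ok", then reads /version. A rough Go sketch of those two probes follows; the endpoint is the one from the log, and TLS/auth handling is a placeholder — minikube itself authenticates with the admin client certificates rather than relying on anonymous access or skipping verification:)

	package main

	import (
		"crypto/tls"
		"encoding/json"
		"fmt"
		"io"
		"net/http"
	)

	func main() {
		base := "https://172.20.128.97:8443"
		// Placeholder transport; a real client would present the cluster CA and client certs.
		client := &http.Client{Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		}}

		// 1) healthz: healthy when the response is HTTP 200 with body "ok".
		resp, err := client.Get(base + "/healthz")
		if err != nil {
			panic(err)
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		fmt.Printf("healthz: %d %s\n", resp.StatusCode, body)

		// 2) version: the JSON body logged above (major/minor/gitVersion/...).
		resp, err = client.Get(base + "/version")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		var v struct {
			GitVersion string `json:"gitVersion"`
			Platform   string `json:"platform"`
		}
		if err := json.NewDecoder(resp.Body).Decode(&v); err != nil {
			panic(err)
		}
		fmt.Println("control plane version:", v.GitVersion, v.Platform)
	}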
	I0604 23:15:41.694338    6196 system_pods.go:43] waiting for kube-system pods to appear ...
	I0604 23:15:41.825588    6196 request.go:629] Waited for 130.8592ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.128.97:8443/api/v1/namespaces/kube-system/pods
	I0604 23:15:41.825588    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/namespaces/kube-system/pods
	I0604 23:15:41.825588    6196 round_trippers.go:469] Request Headers:
	I0604 23:15:41.825588    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:15:41.825588    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:15:41.831583    6196 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0604 23:15:41.831626    6196 round_trippers.go:577] Response Headers:
	I0604 23:15:41.831626    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:15:41.831626    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:15:41.831626    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:15:41.831626    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:15:41.831626    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:15:41 GMT
	I0604 23:15:41.831626    6196 round_trippers.go:580]     Audit-Id: a1dd8dad-c761-4f40-9c96-59a65ba9a574
	I0604 23:15:41.832904    6196 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"425"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-mlh9s","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"15497b54-7964-47a8-9dc8-89c225f6b842","resourceVersion":"421","creationTimestamp":"2024-06-04T23:15:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"35e6f047-84cd-4ebd-aa42-f4810a209d30","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-04T23:15:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"35e6f047-84cd-4ebd-aa42-f4810a209d30\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56451 chars]
	I0604 23:15:41.835551    6196 system_pods.go:59] 8 kube-system pods found
	I0604 23:15:41.835551    6196 system_pods.go:61] "coredns-7db6d8ff4d-mlh9s" [15497b54-7964-47a8-9dc8-89c225f6b842] Running
	I0604 23:15:41.835551    6196 system_pods.go:61] "etcd-multinode-022000" [cf5ce7db-ab12-4be8-9e44-317caab1adeb] Running
	I0604 23:15:41.835551    6196 system_pods.go:61] "kindnet-s279j" [68ac1199-4b19-4f5d-99d5-701006fac840] Running
	I0604 23:15:41.835551    6196 system_pods.go:61] "kube-apiserver-multinode-022000" [a15ca283-cf36-4ce5-846a-37257524e217] Running
	I0604 23:15:41.835551    6196 system_pods.go:61] "kube-controller-manager-multinode-022000" [2bb46405-19fa-4ca8-afd5-6d6224271444] Running
	I0604 23:15:41.835551    6196 system_pods.go:61] "kube-proxy-pbmpr" [ab42abeb-7ba9-4571-8c49-7c7f1e4bb6be] Running
	I0604 23:15:41.835551    6196 system_pods.go:61] "kube-scheduler-multinode-022000" [0453fac4-fec2-4a1f-80f7-c3192dae4ea5] Running
	I0604 23:15:41.835551    6196 system_pods.go:61] "storage-provisioner" [b56880e3-c751-42af-b85d-0ce47f4415ee] Running
	I0604 23:15:41.835551    6196 system_pods.go:74] duration metric: took 141.2122ms to wait for pod list to return data ...
	I0604 23:15:41.835551    6196 default_sa.go:34] waiting for default service account to be created ...
	I0604 23:15:42.028219    6196 request.go:629] Waited for 192.5469ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.128.97:8443/api/v1/namespaces/default/serviceaccounts
	I0604 23:15:42.028395    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/namespaces/default/serviceaccounts
	I0604 23:15:42.028395    6196 round_trippers.go:469] Request Headers:
	I0604 23:15:42.028395    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:15:42.028395    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:15:42.029214    6196 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0604 23:15:42.029214    6196 round_trippers.go:577] Response Headers:
	I0604 23:15:42.029214    6196 round_trippers.go:580]     Audit-Id: 56108434-aa1f-4b19-a5e9-7b19c021ae7b
	I0604 23:15:42.029214    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:15:42.029214    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:15:42.029214    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:15:42.029214    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:15:42.029214    6196 round_trippers.go:580]     Content-Length: 261
	I0604 23:15:42.029214    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:15:42 GMT
	I0604 23:15:42.031901    6196 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"425"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"6c5f1584-41ab-41e8-b2ea-c87ef904e212","resourceVersion":"319","creationTimestamp":"2024-06-04T23:15:24Z"}}]}
	I0604 23:15:42.031951    6196 default_sa.go:45] found service account: "default"
	I0604 23:15:42.031951    6196 default_sa.go:55] duration metric: took 196.3984ms for default service account to be created ...
	I0604 23:15:42.031951    6196 system_pods.go:116] waiting for k8s-apps to be running ...
	I0604 23:15:42.248682    6196 request.go:629] Waited for 216.5522ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.128.97:8443/api/v1/namespaces/kube-system/pods
	I0604 23:15:42.248682    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/namespaces/kube-system/pods
	I0604 23:15:42.248902    6196 round_trippers.go:469] Request Headers:
	I0604 23:15:42.248902    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:15:42.248902    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:15:42.260942    6196 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0604 23:15:42.260942    6196 round_trippers.go:577] Response Headers:
	I0604 23:15:42.260942    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:15:42 GMT
	I0604 23:15:42.260942    6196 round_trippers.go:580]     Audit-Id: 4ec25d06-029e-405c-b93f-2477d94cadb9
	I0604 23:15:42.260942    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:15:42.260942    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:15:42.260942    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:15:42.260942    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:15:42.261540    6196 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"426"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-mlh9s","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"15497b54-7964-47a8-9dc8-89c225f6b842","resourceVersion":"421","creationTimestamp":"2024-06-04T23:15:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"35e6f047-84cd-4ebd-aa42-f4810a209d30","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-04T23:15:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"35e6f047-84cd-4ebd-aa42-f4810a209d30\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56451 chars]
	I0604 23:15:42.264497    6196 system_pods.go:86] 8 kube-system pods found
	I0604 23:15:42.264497    6196 system_pods.go:89] "coredns-7db6d8ff4d-mlh9s" [15497b54-7964-47a8-9dc8-89c225f6b842] Running
	I0604 23:15:42.264497    6196 system_pods.go:89] "etcd-multinode-022000" [cf5ce7db-ab12-4be8-9e44-317caab1adeb] Running
	I0604 23:15:42.264497    6196 system_pods.go:89] "kindnet-s279j" [68ac1199-4b19-4f5d-99d5-701006fac840] Running
	I0604 23:15:42.264497    6196 system_pods.go:89] "kube-apiserver-multinode-022000" [a15ca283-cf36-4ce5-846a-37257524e217] Running
	I0604 23:15:42.264497    6196 system_pods.go:89] "kube-controller-manager-multinode-022000" [2bb46405-19fa-4ca8-afd5-6d6224271444] Running
	I0604 23:15:42.264497    6196 system_pods.go:89] "kube-proxy-pbmpr" [ab42abeb-7ba9-4571-8c49-7c7f1e4bb6be] Running
	I0604 23:15:42.264497    6196 system_pods.go:89] "kube-scheduler-multinode-022000" [0453fac4-fec2-4a1f-80f7-c3192dae4ea5] Running
	I0604 23:15:42.264497    6196 system_pods.go:89] "storage-provisioner" [b56880e3-c751-42af-b85d-0ce47f4415ee] Running
	I0604 23:15:42.264497    6196 system_pods.go:126] duration metric: took 232.5444ms to wait for k8s-apps to be running ...
	I0604 23:15:42.264497    6196 system_svc.go:44] waiting for kubelet service to be running ....
	I0604 23:15:42.276665    6196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0604 23:15:42.304744    6196 system_svc.go:56] duration metric: took 40.1726ms WaitForService to wait for kubelet
	I0604 23:15:42.304744    6196 kubeadm.go:576] duration metric: took 17.532168s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0604 23:15:42.304744    6196 node_conditions.go:102] verifying NodePressure condition ...
	I0604 23:15:42.434079    6196 request.go:629] Waited for 129.3346ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.128.97:8443/api/v1/nodes
	I0604 23:15:42.434637    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes
	I0604 23:15:42.434637    6196 round_trippers.go:469] Request Headers:
	I0604 23:15:42.434710    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:15:42.434710    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:15:42.442663    6196 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0604 23:15:42.442710    6196 round_trippers.go:577] Response Headers:
	I0604 23:15:42.442710    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:15:42.442710    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:15:42 GMT
	I0604 23:15:42.442710    6196 round_trippers.go:580]     Audit-Id: 2b79b053-5719-4ee4-acfa-4dc4a6fbba03
	I0604 23:15:42.442710    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:15:42.442710    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:15:42.442710    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:15:42.442710    6196 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"428"},"items":[{"metadata":{"name":"multinode-022000","uid":"e6c7d6d5-31ef-4aae-82b1-0d1130b29243","resourceVersion":"427","creationTimestamp":"2024-06-04T23:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_04T23_15_12_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 5012 chars]
	I0604 23:15:42.443409    6196 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0604 23:15:42.443458    6196 node_conditions.go:123] node cpu capacity is 2
	I0604 23:15:42.443506    6196 node_conditions.go:105] duration metric: took 138.7615ms to run NodePressure ...
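	(The NodePressure figures above — ephemeral storage 17734596Ki, 2 CPUs — come from status.capacity on the node object returned by /api/v1/nodes. A small stdlib sketch of pulling those fields out of that response; the JSON literal is trimmed to just the capacity map:)

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// Only the capacity fields the NodePressure check reports.
	type node struct {
		Status struct {
			Capacity map[string]string `json:"capacity"`
		} `json:"status"`
	}

	func main() {
		raw := []byte(`{"status":{"capacity":{"cpu":"2","ephemeral-storage":"17734596Ki"}}}`)
		var n node
		if err := json.Unmarshal(raw, &n); err != nil {
			panic(err)
		}
		fmt.Println("ephemeral-storage:", n.Status.Capacity["ephemeral-storage"])
		fmt.Println("cpu:", n.Status.Capacity["cpu"])
	}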
	I0604 23:15:42.443506    6196 start.go:240] waiting for startup goroutines ...
	I0604 23:15:42.443506    6196 start.go:245] waiting for cluster config update ...
	I0604 23:15:42.443506    6196 start.go:254] writing updated cluster config ...
	I0604 23:15:42.449133    6196 out.go:177] 
	I0604 23:15:42.451975    6196 config.go:182] Loaded profile config "ha-609500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0604 23:15:42.457667    6196 config.go:182] Loaded profile config "multinode-022000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0604 23:15:42.457667    6196 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-022000\config.json ...
	I0604 23:15:42.468902    6196 out.go:177] * Starting "multinode-022000-m02" worker node in "multinode-022000" cluster
	I0604 23:15:42.471464    6196 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0604 23:15:42.471464    6196 cache.go:56] Caching tarball of preloaded images
	I0604 23:15:42.472148    6196 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0604 23:15:42.472148    6196 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0604 23:15:42.472148    6196 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-022000\config.json ...
	I0604 23:15:42.474631    6196 start.go:360] acquireMachinesLock for multinode-022000-m02: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0604 23:15:42.475221    6196 start.go:364] duration metric: took 589.1µs to acquireMachinesLock for "multinode-022000-m02"
	I0604 23:15:42.475332    6196 start.go:93] Provisioning new machine with config: &{Name:multinode-022000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19024/minikube-v1.33.1-1717518792-19024-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717518322-19024@sha256:d2210ba725128d67c6173c8b8d82d6c8736e8dad7a6c389a278f795205c6764f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.30.1 ClusterName:multinode-022000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.128.97 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDis
ks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0604 23:15:42.475332    6196 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0604 23:15:42.481198    6196 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0604 23:15:42.481313    6196 start.go:159] libmachine.API.Create for "multinode-022000" (driver="hyperv")
	I0604 23:15:42.481313    6196 client.go:168] LocalClient.Create starting
	I0604 23:15:42.481966    6196 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0604 23:15:42.482163    6196 main.go:141] libmachine: Decoding PEM data...
	I0604 23:15:42.482253    6196 main.go:141] libmachine: Parsing certificate...
	I0604 23:15:42.482417    6196 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0604 23:15:42.482654    6196 main.go:141] libmachine: Decoding PEM data...
	I0604 23:15:42.482733    6196 main.go:141] libmachine: Parsing certificate...
	I0604 23:15:42.482805    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0604 23:15:44.514720    6196 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0604 23:15:44.514720    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:15:44.514890    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0604 23:15:46.366356    6196 main.go:141] libmachine: [stdout =====>] : False
	
	I0604 23:15:46.366356    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:15:46.366356    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0604 23:15:47.979901    6196 main.go:141] libmachine: [stdout =====>] : True
	
	I0604 23:15:47.979901    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:15:47.979901    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0604 23:15:52.107727    6196 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0604 23:15:52.107727    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:15:52.121654    6196 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1717518792-19024-amd64.iso...
	I0604 23:15:52.647323    6196 main.go:141] libmachine: Creating SSH key...
	I0604 23:15:53.109523    6196 main.go:141] libmachine: Creating VM...
	I0604 23:15:53.109523    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0604 23:15:56.298140    6196 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0604 23:15:56.312029    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:15:56.312029    6196 main.go:141] libmachine: Using switch "Default Switch"
	I0604 23:15:56.312029    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0604 23:15:58.188643    6196 main.go:141] libmachine: [stdout =====>] : True
	
	I0604 23:15:58.188643    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:15:58.188643    6196 main.go:141] libmachine: Creating VHD
	I0604 23:15:58.188643    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-022000-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0604 23:16:02.204658    6196 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-022000-m02\fixed
	                          .vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 5C41BDB5-62B8-44D6-88EC-43151DCA7638
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0604 23:16:02.204658    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:16:02.204658    6196 main.go:141] libmachine: Writing magic tar header
	I0604 23:16:02.204951    6196 main.go:141] libmachine: Writing SSH key tar header
	I0604 23:16:02.205729    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-022000-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-022000-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0604 23:16:05.558106    6196 main.go:141] libmachine: [stdout =====>] : 
	I0604 23:16:05.558106    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:16:05.572818    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-022000-m02\disk.vhd' -SizeBytes 20000MB
	I0604 23:16:08.284641    6196 main.go:141] libmachine: [stdout =====>] : 
	I0604 23:16:08.284743    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:16:08.284743    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-022000-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-022000-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0604 23:16:12.197880    6196 main.go:141] libmachine: [stdout =====>] : 
	Name                 State CPUUsage(%!)(MISSING) MemoryAssigned(M) Uptime   Status             Version
	----                 ----- ----------- ----------------- ------   ------             -------
	multinode-022000-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0604 23:16:12.211692    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:16:12.211692    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-022000-m02 -DynamicMemoryEnabled $false
	I0604 23:16:14.661969    6196 main.go:141] libmachine: [stdout =====>] : 
	I0604 23:16:14.673712    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:16:14.673712    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-022000-m02 -Count 2
	I0604 23:16:17.033453    6196 main.go:141] libmachine: [stdout =====>] : 
	I0604 23:16:17.033453    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:16:17.047627    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-022000-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-022000-m02\boot2docker.iso'
	I0604 23:16:19.864987    6196 main.go:141] libmachine: [stdout =====>] : 
	I0604 23:16:19.864987    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:16:19.877331    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-022000-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-022000-m02\disk.vhd'
	I0604 23:16:22.769698    6196 main.go:141] libmachine: [stdout =====>] : 
	I0604 23:16:22.769698    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:16:22.769698    6196 main.go:141] libmachine: Starting VM...
	I0604 23:16:22.769698    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-022000-m02
	I0604 23:16:26.159678    6196 main.go:141] libmachine: [stdout =====>] : 
	I0604 23:16:26.159812    6196 main.go:141] libmachine: [stderr =====>] : 
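The block above shows the driver shelling out to PowerShell for each Hyper-V step (Convert-VHD, Resize-VHD, New-VM, Set-VMMemory, Set-VMProcessor, Set-VMDvdDrive, Add-VMHardDiskDrive, Start-VM), logging the command, stdout and stderr each time. Below is a minimal Go sketch of that pattern, assuming a Windows host with Hyper-V enabled; the `runPowerShell` helper is illustrative and is not the actual libmachine code.

```go
// Minimal sketch (not the libmachine source): run one Hyper-V step such as
// Start-VM the way the "[executing ==>]" log lines above suggest.
package main

import (
	"bytes"
	"fmt"
	"log"
	"os/exec"
)

// runPowerShell is a hypothetical helper: it invokes powershell.exe once per
// cmdlet and returns stdout and stderr separately, mirroring the log format.
func runPowerShell(args ...string) (string, string, error) {
	base := []string{"-NoProfile", "-NonInteractive"}
	cmd := exec.Command(`C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
		append(base, args...)...)
	var stdout, stderr bytes.Buffer
	cmd.Stdout, cmd.Stderr = &stdout, &stderr
	err := cmd.Run()
	return stdout.String(), stderr.String(), err
}

func main() {
	out, errOut, err := runPowerShell(`Hyper-V\Start-VM`, "multinode-022000-m02")
	if err != nil {
		log.Fatalf("Start-VM failed: %v (stderr: %s)", err, errOut)
	}
	fmt.Printf("[stdout =====>] : %s\n", out)
}
```

Running each cmdlet as a separate powershell.exe invocation keeps the caller stateless, at the cost of a few seconds of PowerShell startup per call, which is visible as the 2-3 s gaps between consecutive log timestamps above.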
	I0604 23:16:26.159812    6196 main.go:141] libmachine: Waiting for host to start...
	I0604 23:16:26.159812    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000-m02 ).state
	I0604 23:16:28.648730    6196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:16:28.649504    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:16:28.649597    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000-m02 ).networkadapters[0]).ipaddresses[0]
	I0604 23:16:31.408737    6196 main.go:141] libmachine: [stdout =====>] : 
	I0604 23:16:31.408737    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:16:32.413950    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000-m02 ).state
	I0604 23:16:34.836840    6196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:16:34.836840    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:16:34.849790    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000-m02 ).networkadapters[0]).ipaddresses[0]
	I0604 23:16:37.591191    6196 main.go:141] libmachine: [stdout =====>] : 
	I0604 23:16:37.591191    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:16:38.597239    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000-m02 ).state
	I0604 23:16:40.968166    6196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:16:40.981014    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:16:40.981086    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000-m02 ).networkadapters[0]).ipaddresses[0]
	I0604 23:16:43.768791    6196 main.go:141] libmachine: [stdout =====>] : 
	I0604 23:16:43.775649    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:16:44.777364    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000-m02 ).state
	I0604 23:16:47.155930    6196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:16:47.155930    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:16:47.155930    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000-m02 ).networkadapters[0]).ipaddresses[0]
	I0604 23:16:49.936966    6196 main.go:141] libmachine: [stdout =====>] : 
	I0604 23:16:49.936966    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:16:50.937853    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000-m02 ).state
	I0604 23:16:53.403925    6196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:16:53.409223    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:16:53.409297    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000-m02 ).networkadapters[0]).ipaddresses[0]
	I0604 23:16:56.223975    6196 main.go:141] libmachine: [stdout =====>] : 172.20.130.221
	
	I0604 23:16:56.236119    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:16:56.236119    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000-m02 ).state
	I0604 23:16:58.565169    6196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:16:58.565169    6196 main.go:141] libmachine: [stderr =====>] : 
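The "Waiting for host to start..." section repeats the same pair of queries, the VM state and then the first IP address of the first network adapter, roughly once per second until an address comes back. A self-contained sketch of that loop, assuming the same powershell.exe front end; `ps` and `waitForIP` are illustrative names, not minikube functions.

```go
// Sketch of the wait loop: keep polling the VM state and its first adapter
// address until an IP shows up or the timeout expires.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// ps runs a single PowerShell expression and returns trimmed stdout.
func ps(expr string) string {
	out, _ := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", expr).Output()
	return strings.TrimSpace(string(out))
}

func waitForIP(vm string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if ps(fmt.Sprintf(`( Hyper-V\Get-VM %s ).state`, vm)) == "Running" {
			if ip := ps(fmt.Sprintf(`(( Hyper-V\Get-VM %s ).networkadapters[0]).ipaddresses[0]`, vm)); ip != "" {
				return ip, nil
			}
		}
		time.Sleep(time.Second)
	}
	return "", fmt.Errorf("timed out waiting for %s to report an IP", vm)
}

func main() {
	ip, err := waitForIP("multinode-022000-m02", 5*time.Minute)
	fmt.Println(ip, err)
}
```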
	I0604 23:16:58.565169    6196 machine.go:94] provisionDockerMachine start ...
	I0604 23:16:58.572380    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000-m02 ).state
	I0604 23:17:00.877597    6196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:17:00.877597    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:17:00.890044    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000-m02 ).networkadapters[0]).ipaddresses[0]
	I0604 23:17:03.634831    6196 main.go:141] libmachine: [stdout =====>] : 172.20.130.221
	
	I0604 23:17:03.634831    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:17:03.653098    6196 main.go:141] libmachine: Using SSH client type: native
	I0604 23:17:03.654173    6196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf6a540] 0xf6d120 <nil>  [] 0s} 172.20.130.221 22 <nil> <nil>}
	I0604 23:17:03.654243    6196 main.go:141] libmachine: About to run SSH command:
	hostname
	I0604 23:17:03.779399    6196 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0604 23:17:03.779484    6196 buildroot.go:166] provisioning hostname "multinode-022000-m02"
	I0604 23:17:03.779565    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000-m02 ).state
	I0604 23:17:06.152838    6196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:17:06.152838    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:17:06.153007    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000-m02 ).networkadapters[0]).ipaddresses[0]
	I0604 23:17:08.976532    6196 main.go:141] libmachine: [stdout =====>] : 172.20.130.221
	
	I0604 23:17:08.976532    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:17:08.993610    6196 main.go:141] libmachine: Using SSH client type: native
	I0604 23:17:08.993728    6196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf6a540] 0xf6d120 <nil>  [] 0s} 172.20.130.221 22 <nil> <nil>}
	I0604 23:17:08.993728    6196 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-022000-m02 && echo "multinode-022000-m02" | sudo tee /etc/hostname
	I0604 23:17:09.151794    6196 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-022000-m02
	
	I0604 23:17:09.151794    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000-m02 ).state
	I0604 23:17:11.460477    6196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:17:11.473267    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:17:11.473267    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000-m02 ).networkadapters[0]).ipaddresses[0]
	I0604 23:17:14.247697    6196 main.go:141] libmachine: [stdout =====>] : 172.20.130.221
	
	I0604 23:17:14.247697    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:17:14.267409    6196 main.go:141] libmachine: Using SSH client type: native
	I0604 23:17:14.268065    6196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf6a540] 0xf6d120 <nil>  [] 0s} 172.20.130.221 22 <nil> <nil>}
	I0604 23:17:14.268065    6196 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-022000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-022000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-022000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0604 23:17:14.420587    6196 main.go:141] libmachine: SSH cmd err, output: <nil>: 
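Provisioning the hostname comes down to two SSH commands: set and persist the hostname, then make sure the 127.0.1.1 entry in /etc/hosts points at the new name without touching other entries. A small sketch of how those commands can be composed per node; the helper name is made up, the shell fragments are the ones shown in the log.

```go
// Sketch: build the hostname-provisioning commands for one node before
// handing them to an SSH runner. Nothing is executed here.
package main

import "fmt"

func hostnameCommands(name string) []string {
	return []string{
		// set the kernel hostname and persist it in /etc/hostname
		fmt.Sprintf(`sudo hostname %s && echo "%s" | sudo tee /etc/hostname`, name, name),
		// point 127.0.1.1 at the new name, adding the entry only if it is missing
		fmt.Sprintf(`
		if ! grep -xq '.*\s%s' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %s/g' /etc/hosts;
			else
				echo '127.0.1.1 %s' | sudo tee -a /etc/hosts;
			fi
		fi`, name, name, name),
	}
}

func main() {
	for _, c := range hostnameCommands("multinode-022000-m02") {
		fmt.Println(c)
	}
}
```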
	I0604 23:17:14.420649    6196 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0604 23:17:14.420649    6196 buildroot.go:174] setting up certificates
	I0604 23:17:14.420649    6196 provision.go:84] configureAuth start
	I0604 23:17:14.420747    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000-m02 ).state
	I0604 23:17:16.735257    6196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:17:16.735257    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:17:16.735257    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000-m02 ).networkadapters[0]).ipaddresses[0]
	I0604 23:17:19.459399    6196 main.go:141] libmachine: [stdout =====>] : 172.20.130.221
	
	I0604 23:17:19.459399    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:17:19.472388    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000-m02 ).state
	I0604 23:17:21.778086    6196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:17:21.778086    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:17:21.778086    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000-m02 ).networkadapters[0]).ipaddresses[0]
	I0604 23:17:24.587963    6196 main.go:141] libmachine: [stdout =====>] : 172.20.130.221
	
	I0604 23:17:24.587963    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:17:24.587963    6196 provision.go:143] copyHostCerts
	I0604 23:17:24.588863    6196 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0604 23:17:24.589122    6196 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0604 23:17:24.589259    6196 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0604 23:17:24.589783    6196 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0604 23:17:24.591006    6196 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0604 23:17:24.591006    6196 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0604 23:17:24.591006    6196 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0604 23:17:24.591641    6196 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0604 23:17:24.592249    6196 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0604 23:17:24.592777    6196 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0604 23:17:24.592777    6196 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0604 23:17:24.592848    6196 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0604 23:17:24.593990    6196 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-022000-m02 san=[127.0.0.1 172.20.130.221 localhost minikube multinode-022000-m02]
	I0604 23:17:25.113078    6196 provision.go:177] copyRemoteCerts
	I0604 23:17:25.129161    6196 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0604 23:17:25.129161    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000-m02 ).state
	I0604 23:17:27.535921    6196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:17:27.535921    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:17:27.535921    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000-m02 ).networkadapters[0]).ipaddresses[0]
	I0604 23:17:30.376005    6196 main.go:141] libmachine: [stdout =====>] : 172.20.130.221
	
	I0604 23:17:30.376005    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:17:30.390275    6196 sshutil.go:53] new ssh client: &{IP:172.20.130.221 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-022000-m02\id_rsa Username:docker}
	I0604 23:17:30.502796    6196 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.3735929s)
	I0604 23:17:30.502900    6196 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0604 23:17:30.503336    6196 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0604 23:17:30.556821    6196 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0604 23:17:30.556939    6196 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0604 23:17:30.609978    6196 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0604 23:17:30.610262    6196 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0604 23:17:30.661820    6196 provision.go:87] duration metric: took 16.2410475s to configureAuth
	I0604 23:17:30.661820    6196 buildroot.go:189] setting minikube options for container-runtime
	I0604 23:17:30.662598    6196 config.go:182] Loaded profile config "multinode-022000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0604 23:17:30.662598    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000-m02 ).state
	I0604 23:17:33.003698    6196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:17:33.003698    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:17:33.017851    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000-m02 ).networkadapters[0]).ipaddresses[0]
	I0604 23:17:35.851680    6196 main.go:141] libmachine: [stdout =====>] : 172.20.130.221
	
	I0604 23:17:35.851680    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:17:35.871644    6196 main.go:141] libmachine: Using SSH client type: native
	I0604 23:17:35.871950    6196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf6a540] 0xf6d120 <nil>  [] 0s} 172.20.130.221 22 <nil> <nil>}
	I0604 23:17:35.871950    6196 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0604 23:17:36.008288    6196 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0604 23:17:36.008397    6196 buildroot.go:70] root file system type: tmpfs
	I0604 23:17:36.008529    6196 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0604 23:17:36.008645    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000-m02 ).state
	I0604 23:17:38.329374    6196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:17:38.332024    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:17:38.332024    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000-m02 ).networkadapters[0]).ipaddresses[0]
	I0604 23:17:41.127630    6196 main.go:141] libmachine: [stdout =====>] : 172.20.130.221
	
	I0604 23:17:41.127630    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:17:41.134742    6196 main.go:141] libmachine: Using SSH client type: native
	I0604 23:17:41.134742    6196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf6a540] 0xf6d120 <nil>  [] 0s} 172.20.130.221 22 <nil> <nil>}
	I0604 23:17:41.135833    6196 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.20.128.97"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0604 23:17:41.299445    6196 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.20.128.97
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0604 23:17:41.299556    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000-m02 ).state
	I0604 23:17:43.635240    6196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:17:43.635240    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:17:43.647988    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000-m02 ).networkadapters[0]).ipaddresses[0]
	I0604 23:17:46.456880    6196 main.go:141] libmachine: [stdout =====>] : 172.20.130.221
	
	I0604 23:17:46.469870    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:17:46.476966    6196 main.go:141] libmachine: Using SSH client type: native
	I0604 23:17:46.477091    6196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf6a540] 0xf6d120 <nil>  [] 0s} 172.20.130.221 22 <nil> <nil>}
	I0604 23:17:46.477091    6196 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0604 23:17:48.661850    6196 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0604 23:17:48.661850    6196 machine.go:97] duration metric: took 50.0962966s to provisionDockerMachine
	I0604 23:17:48.661850    6196 client.go:171] duration metric: took 2m6.1795717s to LocalClient.Create
	I0604 23:17:48.661850    6196 start.go:167] duration metric: took 2m6.1795717s to libmachine.API.Create "multinode-022000"
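The unit file is written to docker.service.new and only swapped in when `diff` reports a difference, so re-provisioning an already-configured machine is a no-op; here the diff fails because the unit does not exist yet, so the new file is installed and docker is enabled and restarted. A sketch of that conditional update, assuming the same path layout; the helper is illustrative.

```go
// Sketch of the idempotent unit update seen above: only replace the unit and
// restart the service when the generated file actually differs.
package main

import "fmt"

func updateUnitCmd(unit string) string {
	path := "/lib/systemd/system/" + unit
	return fmt.Sprintf(
		"sudo diff -u %[1]s %[1]s.new || { sudo mv %[1]s.new %[1]s; "+
			"sudo systemctl -f daemon-reload && sudo systemctl -f enable %[2]s && sudo systemctl -f restart %[2]s; }",
		path, unit)
}

func main() {
	fmt.Println(updateUnitCmd("docker.service"))
}
```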
	I0604 23:17:48.661850    6196 start.go:293] postStartSetup for "multinode-022000-m02" (driver="hyperv")
	I0604 23:17:48.661850    6196 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0604 23:17:48.677250    6196 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0604 23:17:48.677779    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000-m02 ).state
	I0604 23:17:51.031425    6196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:17:51.031425    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:17:51.031425    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000-m02 ).networkadapters[0]).ipaddresses[0]
	I0604 23:17:53.813598    6196 main.go:141] libmachine: [stdout =====>] : 172.20.130.221
	
	I0604 23:17:53.827297    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:17:53.827933    6196 sshutil.go:53] new ssh client: &{IP:172.20.130.221 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-022000-m02\id_rsa Username:docker}
	I0604 23:17:53.939077    6196 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.2611451s)
	I0604 23:17:53.960835    6196 ssh_runner.go:195] Run: cat /etc/os-release
	I0604 23:17:53.969287    6196 command_runner.go:130] > NAME=Buildroot
	I0604 23:17:53.969393    6196 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0604 23:17:53.969393    6196 command_runner.go:130] > ID=buildroot
	I0604 23:17:53.969393    6196 command_runner.go:130] > VERSION_ID=2023.02.9
	I0604 23:17:53.969495    6196 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0604 23:17:53.969552    6196 info.go:137] Remote host: Buildroot 2023.02.9
	I0604 23:17:53.969646    6196 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0604 23:17:53.969926    6196 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0604 23:17:53.970613    6196 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\140642.pem -> 140642.pem in /etc/ssl/certs
	I0604 23:17:53.970613    6196 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\140642.pem -> /etc/ssl/certs/140642.pem
	I0604 23:17:53.984188    6196 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0604 23:17:54.005090    6196 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\140642.pem --> /etc/ssl/certs/140642.pem (1708 bytes)
	I0604 23:17:54.054235    6196 start.go:296] duration metric: took 5.3923431s for postStartSetup
	I0604 23:17:54.056843    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000-m02 ).state
	I0604 23:17:56.396798    6196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:17:56.396798    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:17:56.396798    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000-m02 ).networkadapters[0]).ipaddresses[0]
	I0604 23:17:59.156279    6196 main.go:141] libmachine: [stdout =====>] : 172.20.130.221
	
	I0604 23:17:59.156279    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:17:59.169455    6196 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-022000\config.json ...
	I0604 23:17:59.171897    6196 start.go:128] duration metric: took 2m16.6955176s to createHost
	I0604 23:17:59.171969    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000-m02 ).state
	I0604 23:18:01.474791    6196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:18:01.485080    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:18:01.485175    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000-m02 ).networkadapters[0]).ipaddresses[0]
	I0604 23:18:04.286163    6196 main.go:141] libmachine: [stdout =====>] : 172.20.130.221
	
	I0604 23:18:04.297129    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:18:04.303171    6196 main.go:141] libmachine: Using SSH client type: native
	I0604 23:18:04.303446    6196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf6a540] 0xf6d120 <nil>  [] 0s} 172.20.130.221 22 <nil> <nil>}
	I0604 23:18:04.303446    6196 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0604 23:18:04.427048    6196 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717543084.434587741
	
	I0604 23:18:04.427048    6196 fix.go:216] guest clock: 1717543084.434587741
	I0604 23:18:04.427048    6196 fix.go:229] Guest: 2024-06-04 23:18:04.434587741 +0000 UTC Remote: 2024-06-04 23:17:59.1719696 +0000 UTC m=+368.406865501 (delta=5.262618141s)
	I0604 23:18:04.427048    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000-m02 ).state
	I0604 23:18:06.771882    6196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:18:06.777131    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:18:06.777131    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000-m02 ).networkadapters[0]).ipaddresses[0]
	I0604 23:18:09.512387    6196 main.go:141] libmachine: [stdout =====>] : 172.20.130.221
	
	I0604 23:18:09.524213    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:18:09.530463    6196 main.go:141] libmachine: Using SSH client type: native
	I0604 23:18:09.530650    6196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf6a540] 0xf6d120 <nil>  [] 0s} 172.20.130.221 22 <nil> <nil>}
	I0604 23:18:09.530650    6196 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1717543084
	I0604 23:18:09.673682    6196 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Jun  4 23:18:04 UTC 2024
	
	I0604 23:18:09.673682    6196 fix.go:236] clock set: Tue Jun  4 23:18:04 UTC 2024
	 (err=<nil>)
	I0604 23:18:09.673682    6196 start.go:83] releasing machines lock for "multinode-022000-m02", held for 2m27.1973314s
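fix.go compares the guest clock with the host-side timestamp and resets the guest with `sudo date -s @<epoch>` when the drift (5.26 s here) exceeds its tolerance. Below is a rough sketch of one way to express that check; the threshold and the choice of which clock wins are assumptions for illustration, not minikube's exact policy.

```go
// Sketch only: decide whether the guest clock needs resetting and, if so,
// produce the `date -s` command to run over SSH.
package main

import (
	"fmt"
	"time"
)

// guestClockFix is a hypothetical helper; the real code also parses the
// `date +%s.%N` output that comes back from the guest.
func guestClockFix(host, guest time.Time, tolerance time.Duration) (string, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	if delta <= tolerance {
		return "", false
	}
	// assumption: align the guest to the host clock
	return fmt.Sprintf("sudo date -s @%d", host.Unix()), true
}

func main() {
	host := time.Now()
	guest := host.Add(5 * time.Second) // pretend the guest is ~5s ahead, as in the log
	if cmd, ok := guestClockFix(host, guest, 2*time.Second); ok {
		fmt.Println(cmd)
	}
}
```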
	I0604 23:18:09.674306    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000-m02 ).state
	I0604 23:18:12.073018    6196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:18:12.073018    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:18:12.073921    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000-m02 ).networkadapters[0]).ipaddresses[0]
	I0604 23:18:14.927798    6196 main.go:141] libmachine: [stdout =====>] : 172.20.130.221
	
	I0604 23:18:14.927823    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:18:14.934065    6196 out.go:177] * Found network options:
	I0604 23:18:14.940257    6196 out.go:177]   - NO_PROXY=172.20.128.97
	W0604 23:18:14.945292    6196 proxy.go:119] fail to check proxy env: Error ip not in block
	I0604 23:18:14.948565    6196 out.go:177]   - NO_PROXY=172.20.128.97
	W0604 23:18:14.953225    6196 proxy.go:119] fail to check proxy env: Error ip not in block
	W0604 23:18:14.954112    6196 proxy.go:119] fail to check proxy env: Error ip not in block
	I0604 23:18:14.957143    6196 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0604 23:18:14.957143    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000-m02 ).state
	I0604 23:18:14.969282    6196 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0604 23:18:14.969430    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000-m02 ).state
	I0604 23:18:17.399016    6196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:18:17.399096    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:18:17.399164    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000-m02 ).networkadapters[0]).ipaddresses[0]
	I0604 23:18:17.399927    6196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:18:17.399993    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:18:17.400053    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000-m02 ).networkadapters[0]).ipaddresses[0]
	I0604 23:18:20.350627    6196 main.go:141] libmachine: [stdout =====>] : 172.20.130.221
	
	I0604 23:18:20.350853    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:18:20.351346    6196 sshutil.go:53] new ssh client: &{IP:172.20.130.221 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-022000-m02\id_rsa Username:docker}
	I0604 23:18:20.382580    6196 main.go:141] libmachine: [stdout =====>] : 172.20.130.221
	
	I0604 23:18:20.382641    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:18:20.382641    6196 sshutil.go:53] new ssh client: &{IP:172.20.130.221 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-022000-m02\id_rsa Username:docker}
	I0604 23:18:20.452598    6196 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0604 23:18:20.453553    6196 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.4840363s)
	W0604 23:18:20.453553    6196 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0604 23:18:20.466457    6196 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0604 23:18:20.575992    6196 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0604 23:18:20.575992    6196 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.6188047s)
	I0604 23:18:20.576266    6196 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0604 23:18:20.576266    6196 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0604 23:18:20.576266    6196 start.go:494] detecting cgroup driver to use...
	I0604 23:18:20.576266    6196 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0604 23:18:20.613479    6196 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0604 23:18:20.626312    6196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0604 23:18:20.666240    6196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0604 23:18:20.694545    6196 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0604 23:18:20.708354    6196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0604 23:18:20.744349    6196 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0604 23:18:20.780748    6196 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0604 23:18:20.820695    6196 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0604 23:18:20.865712    6196 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0604 23:18:20.904861    6196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0604 23:18:20.940278    6196 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0604 23:18:20.976219    6196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0604 23:18:21.011790    6196 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0604 23:18:21.037096    6196 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0604 23:18:21.050503    6196 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0604 23:18:21.089578    6196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0604 23:18:21.315850    6196 ssh_runner.go:195] Run: sudo systemctl restart containerd
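Before settling on Docker, the containerd config is normalised with a series of in-place sed edits: pin the pause image, force cgroupfs (`SystemdCgroup = false`), and move any v1 runc runtime references to `io.containerd.runc.v2`. The sketch below just collects those expressions, copied from the log, into one list; the function name is made up.

```go
// Sketch: the containerd config.toml edits from the log, gathered as a list of
// shell commands for an SSH runner to execute in order.
package main

import "fmt"

func containerdCgroupfsEdits() []string {
	return []string{
		`sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml`,
		`sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml`,
		`sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml`,
		`sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml`,
		`sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml`,
		`sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml`,
	}
}

func main() {
	for _, e := range containerdCgroupfsEdits() {
		fmt.Println(e)
	}
}
```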
	I0604 23:18:21.352603    6196 start.go:494] detecting cgroup driver to use...
	I0604 23:18:21.364524    6196 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0604 23:18:21.394481    6196 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0604 23:18:21.394530    6196 command_runner.go:130] > [Unit]
	I0604 23:18:21.394530    6196 command_runner.go:130] > Description=Docker Application Container Engine
	I0604 23:18:21.394613    6196 command_runner.go:130] > Documentation=https://docs.docker.com
	I0604 23:18:21.394613    6196 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0604 23:18:21.394613    6196 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0604 23:18:21.394613    6196 command_runner.go:130] > StartLimitBurst=3
	I0604 23:18:21.394689    6196 command_runner.go:130] > StartLimitIntervalSec=60
	I0604 23:18:21.394689    6196 command_runner.go:130] > [Service]
	I0604 23:18:21.394751    6196 command_runner.go:130] > Type=notify
	I0604 23:18:21.394751    6196 command_runner.go:130] > Restart=on-failure
	I0604 23:18:21.394751    6196 command_runner.go:130] > Environment=NO_PROXY=172.20.128.97
	I0604 23:18:21.394751    6196 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0604 23:18:21.394810    6196 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0604 23:18:21.394810    6196 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0604 23:18:21.394810    6196 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0604 23:18:21.394810    6196 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0604 23:18:21.394810    6196 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0604 23:18:21.394810    6196 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0604 23:18:21.394936    6196 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0604 23:18:21.394936    6196 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0604 23:18:21.394936    6196 command_runner.go:130] > ExecStart=
	I0604 23:18:21.394936    6196 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0604 23:18:21.394936    6196 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0604 23:18:21.395084    6196 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0604 23:18:21.395116    6196 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0604 23:18:21.395116    6196 command_runner.go:130] > LimitNOFILE=infinity
	I0604 23:18:21.395116    6196 command_runner.go:130] > LimitNPROC=infinity
	I0604 23:18:21.395116    6196 command_runner.go:130] > LimitCORE=infinity
	I0604 23:18:21.395116    6196 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0604 23:18:21.395116    6196 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0604 23:18:21.395116    6196 command_runner.go:130] > TasksMax=infinity
	I0604 23:18:21.395195    6196 command_runner.go:130] > TimeoutStartSec=0
	I0604 23:18:21.395195    6196 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0604 23:18:21.395195    6196 command_runner.go:130] > Delegate=yes
	I0604 23:18:21.395195    6196 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0604 23:18:21.395195    6196 command_runner.go:130] > KillMode=process
	I0604 23:18:21.395262    6196 command_runner.go:130] > [Install]
	I0604 23:18:21.395262    6196 command_runner.go:130] > WantedBy=multi-user.target
	I0604 23:18:21.407378    6196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0604 23:18:21.449881    6196 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0604 23:18:21.492572    6196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0604 23:18:21.532454    6196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0604 23:18:21.577131    6196 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0604 23:18:21.638980    6196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0604 23:18:21.665989    6196 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0604 23:18:21.714593    6196 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0604 23:18:21.727109    6196 ssh_runner.go:195] Run: which cri-dockerd
	I0604 23:18:21.734685    6196 command_runner.go:130] > /usr/bin/cri-dockerd
	I0604 23:18:21.747902    6196 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0604 23:18:21.772238    6196 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0604 23:18:21.822253    6196 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0604 23:18:22.051123    6196 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0604 23:18:22.266516    6196 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0604 23:18:22.266603    6196 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0604 23:18:22.315702    6196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0604 23:18:22.542314    6196 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0604 23:18:25.133769    6196 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5913256s)
	I0604 23:18:25.147140    6196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0604 23:18:25.190347    6196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0604 23:18:25.233116    6196 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0604 23:18:25.457580    6196 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0604 23:18:25.681445    6196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0604 23:18:25.904637    6196 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0604 23:18:25.956426    6196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0604 23:18:25.998355    6196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0604 23:18:26.229598    6196 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0604 23:18:26.369965    6196 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0604 23:18:26.383946    6196 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0604 23:18:26.395004    6196 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0604 23:18:26.395065    6196 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0604 23:18:26.395065    6196 command_runner.go:130] > Device: 0,22	Inode: 896         Links: 1
	I0604 23:18:26.395065    6196 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0604 23:18:26.395065    6196 command_runner.go:130] > Access: 2024-06-04 23:18:26.264546037 +0000
	I0604 23:18:26.395143    6196 command_runner.go:130] > Modify: 2024-06-04 23:18:26.264546037 +0000
	I0604 23:18:26.395143    6196 command_runner.go:130] > Change: 2024-06-04 23:18:26.268546042 +0000
	I0604 23:18:26.395143    6196 command_runner.go:130] >  Birth: -
	I0604 23:18:26.395202    6196 start.go:562] Will wait 60s for crictl version
	I0604 23:18:26.410130    6196 ssh_runner.go:195] Run: which crictl
	I0604 23:18:26.417755    6196 command_runner.go:130] > /usr/bin/crictl
	I0604 23:18:26.432373    6196 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0604 23:18:26.494537    6196 command_runner.go:130] > Version:  0.1.0
	I0604 23:18:26.495379    6196 command_runner.go:130] > RuntimeName:  docker
	I0604 23:18:26.495379    6196 command_runner.go:130] > RuntimeVersion:  26.1.3
	I0604 23:18:26.495379    6196 command_runner.go:130] > RuntimeApiVersion:  v1
	I0604 23:18:26.495379    6196 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.1.3
	RuntimeApiVersion:  v1
	I0604 23:18:26.505409    6196 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0604 23:18:26.541515    6196 command_runner.go:130] > 26.1.3
	I0604 23:18:26.552369    6196 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0604 23:18:26.594387    6196 command_runner.go:130] > 26.1.3
	I0604 23:18:26.599240    6196 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.1.3 ...
	I0604 23:18:26.603977    6196 out.go:177]   - env NO_PROXY=172.20.128.97
	I0604 23:18:26.606107    6196 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0604 23:18:26.611209    6196 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0604 23:18:26.611209    6196 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0604 23:18:26.611209    6196 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0604 23:18:26.611209    6196 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:24:f8:85 Flags:up|broadcast|multicast|running}
	I0604 23:18:26.614235    6196 ip.go:210] interface addr: fe80::4093:d10:ab69:6c7d/64
	I0604 23:18:26.614235    6196 ip.go:210] interface addr: 172.20.128.1/20
	I0604 23:18:26.629228    6196 ssh_runner.go:195] Run: grep 172.20.128.1	host.minikube.internal$ /etc/hosts
	I0604 23:18:26.635121    6196 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.20.128.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0604 23:18:26.665661    6196 mustload.go:65] Loading cluster: multinode-022000
	I0604 23:18:26.666294    6196 config.go:182] Loaded profile config "multinode-022000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0604 23:18:26.666980    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000 ).state
	I0604 23:18:29.076439    6196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:18:29.076439    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:18:29.077158    6196 host.go:66] Checking if "multinode-022000" exists ...
	I0604 23:18:29.077937    6196 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-022000 for IP: 172.20.130.221
	I0604 23:18:29.077937    6196 certs.go:194] generating shared ca certs ...
	I0604 23:18:29.077937    6196 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0604 23:18:29.078686    6196 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0604 23:18:29.079083    6196 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0604 23:18:29.079348    6196 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0604 23:18:29.079607    6196 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0604 23:18:29.079830    6196 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0604 23:18:29.080240    6196 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0604 23:18:29.080969    6196 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14064.pem (1338 bytes)
	W0604 23:18:29.082092    6196 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14064_empty.pem, impossibly tiny 0 bytes
	I0604 23:18:29.082336    6196 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0604 23:18:29.083276    6196 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0604 23:18:29.083712    6196 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0604 23:18:29.083712    6196 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0604 23:18:29.084478    6196 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\140642.pem (1708 bytes)
	I0604 23:18:29.085001    6196 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\140642.pem -> /usr/share/ca-certificates/140642.pem
	I0604 23:18:29.085193    6196 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0604 23:18:29.085193    6196 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14064.pem -> /usr/share/ca-certificates/14064.pem
	I0604 23:18:29.085833    6196 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0604 23:18:29.142329    6196 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0604 23:18:29.192508    6196 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0604 23:18:29.245712    6196 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0604 23:18:29.302632    6196 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\140642.pem --> /usr/share/ca-certificates/140642.pem (1708 bytes)
	I0604 23:18:29.360379    6196 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0604 23:18:29.414902    6196 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14064.pem --> /usr/share/ca-certificates/14064.pem (1338 bytes)
	I0604 23:18:29.483600    6196 ssh_runner.go:195] Run: openssl version
	I0604 23:18:29.493599    6196 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0604 23:18:29.507152    6196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14064.pem && ln -fs /usr/share/ca-certificates/14064.pem /etc/ssl/certs/14064.pem"
	I0604 23:18:29.540108    6196 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14064.pem
	I0604 23:18:29.548386    6196 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jun  4 21:50 /usr/share/ca-certificates/14064.pem
	I0604 23:18:29.548386    6196 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun  4 21:50 /usr/share/ca-certificates/14064.pem
	I0604 23:18:29.561730    6196 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14064.pem
	I0604 23:18:29.572904    6196 command_runner.go:130] > 51391683
	I0604 23:18:29.586256    6196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14064.pem /etc/ssl/certs/51391683.0"
	I0604 23:18:29.626490    6196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/140642.pem && ln -fs /usr/share/ca-certificates/140642.pem /etc/ssl/certs/140642.pem"
	I0604 23:18:29.662532    6196 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/140642.pem
	I0604 23:18:29.669536    6196 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jun  4 21:50 /usr/share/ca-certificates/140642.pem
	I0604 23:18:29.670775    6196 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun  4 21:50 /usr/share/ca-certificates/140642.pem
	I0604 23:18:29.686081    6196 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/140642.pem
	I0604 23:18:29.696479    6196 command_runner.go:130] > 3ec20f2e
	I0604 23:18:29.712433    6196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/140642.pem /etc/ssl/certs/3ec20f2e.0"
	I0604 23:18:29.750016    6196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0604 23:18:29.785201    6196 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0604 23:18:29.793961    6196 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jun  4 21:33 /usr/share/ca-certificates/minikubeCA.pem
	I0604 23:18:29.793961    6196 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun  4 21:33 /usr/share/ca-certificates/minikubeCA.pem
	I0604 23:18:29.807916    6196 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0604 23:18:29.817895    6196 command_runner.go:130] > b5213941
	I0604 23:18:29.831953    6196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
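The commands above install each uploaded certificate into the guest's trust store by symlinking it under its OpenSSL subject hash. A minimal sketch of that pattern, assuming a certificate already present at the hypothetical path /usr/share/ca-certificates/example.pem (illustration only, not a path from this run):

    # CERT is a hypothetical path used only for illustration
    CERT=/usr/share/ca-certificates/example.pem
    # print the certificate's OpenSSL subject hash (as logged above, e.g. 51391683)
    HASH=$(openssl x509 -hash -noout -in "$CERT")
    # link the cert under <hash>.0 so OpenSSL's trust lookup can find it
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"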
	I0604 23:18:29.871387    6196 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0604 23:18:29.878555    6196 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0604 23:18:29.879521    6196 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0604 23:18:29.879521    6196 kubeadm.go:928] updating node {m02 172.20.130.221 8443 v1.30.1 docker false true} ...
	I0604 23:18:29.879521    6196 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-022000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.20.130.221
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:multinode-022000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0604 23:18:29.892504    6196 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0604 23:18:29.911673    6196 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/binaries/v1.30.1': No such file or directory
	I0604 23:18:29.912682    6196 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.1': No such file or directory
	
	Initiating transfer...
	I0604 23:18:29.925801    6196 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.1
	I0604 23:18:29.948070    6196 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm.sha256
	I0604 23:18:29.948070    6196 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl.sha256
	I0604 23:18:29.948070    6196 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet.sha256
	I0604 23:18:29.948070    6196 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubeadm -> /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0604 23:18:29.948070    6196 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubectl -> /var/lib/minikube/binaries/v1.30.1/kubectl
	I0604 23:18:29.964713    6196 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0604 23:18:29.964713    6196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0604 23:18:29.965819    6196 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubectl
	I0604 23:18:29.972550    6196 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubeadm': No such file or directory
	I0604 23:18:29.972550    6196 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubeadm': No such file or directory
	I0604 23:18:29.972550    6196 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubeadm --> /var/lib/minikube/binaries/v1.30.1/kubeadm (50249880 bytes)
	I0604 23:18:30.017966    6196 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubectl': No such file or directory
	I0604 23:18:30.017966    6196 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubelet -> /var/lib/minikube/binaries/v1.30.1/kubelet
	I0604 23:18:30.018081    6196 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubectl': No such file or directory
	I0604 23:18:30.018081    6196 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubectl --> /var/lib/minikube/binaries/v1.30.1/kubectl (51454104 bytes)
	I0604 23:18:30.032378    6196 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubelet
	I0604 23:18:30.091120    6196 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubelet': No such file or directory
	I0604 23:18:30.091199    6196 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubelet': No such file or directory
	I0604 23:18:30.091277    6196 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubelet --> /var/lib/minikube/binaries/v1.30.1/kubelet (100100024 bytes)
	I0604 23:18:31.463231    6196 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0604 23:18:31.485034    6196 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0604 23:18:31.526100    6196 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0604 23:18:31.589116    6196 ssh_runner.go:195] Run: grep 172.20.128.97	control-plane.minikube.internal$ /etc/hosts
	I0604 23:18:31.597123    6196 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.20.128.97	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0604 23:18:31.639789    6196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0604 23:18:31.868188    6196 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0604 23:18:31.909260    6196 host.go:66] Checking if "multinode-022000" exists ...
	I0604 23:18:31.910204    6196 start.go:316] joinCluster: &{Name:multinode-022000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19024/minikube-v1.33.1-1717518792-19024-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717518322-19024@sha256:d2210ba725128d67c6173c8b8d82d6c8736e8dad7a6c389a278f795205c6764f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
1 ClusterName:multinode-022000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.128.97 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.20.130.221 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertE
xpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0604 23:18:31.910356    6196 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0604 23:18:31.910356    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000 ).state
	I0604 23:18:34.299866    6196 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:18:34.300208    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:18:34.300273    6196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000 ).networkadapters[0]).ipaddresses[0]
	I0604 23:18:37.151200    6196 main.go:141] libmachine: [stdout =====>] : 172.20.128.97
	
	I0604 23:18:37.151443    6196 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:18:37.151930    6196 sshutil.go:53] new ssh client: &{IP:172.20.128.97 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-022000\id_rsa Username:docker}
	I0604 23:18:37.373477    6196 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token lovppt.0hva0dl3n0bmygf4 --discovery-token-ca-cert-hash sha256:0ff0b921e821f4428dbcf012dd411f291cf65527f1de8321919343295d694e15 
	I0604 23:18:37.373640    6196 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm token create --print-join-command --ttl=0": (5.4632407s)
	I0604 23:18:37.373785    6196 start.go:342] trying to join worker node "m02" to cluster: &{Name:m02 IP:172.20.130.221 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0604 23:18:37.373826    6196 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token lovppt.0hva0dl3n0bmygf4 --discovery-token-ca-cert-hash sha256:0ff0b921e821f4428dbcf012dd411f291cf65527f1de8321919343295d694e15 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-022000-m02"
	I0604 23:18:37.612732    6196 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0604 23:18:39.474813    6196 command_runner.go:130] > [preflight] Running pre-flight checks
	I0604 23:18:39.474813    6196 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0604 23:18:39.474813    6196 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0604 23:18:39.474813    6196 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0604 23:18:39.474813    6196 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0604 23:18:39.474813    6196 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0604 23:18:39.475816    6196 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0604 23:18:39.475816    6196 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 1.001311914s
	I0604 23:18:39.475816    6196 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap
	I0604 23:18:39.475816    6196 command_runner.go:130] > This node has joined the cluster:
	I0604 23:18:39.475816    6196 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0604 23:18:39.475816    6196 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0604 23:18:39.475816    6196 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0604 23:18:39.475816    6196 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token lovppt.0hva0dl3n0bmygf4 --discovery-token-ca-cert-hash sha256:0ff0b921e821f4428dbcf012dd411f291cf65527f1de8321919343295d694e15 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-022000-m02": (2.1019743s)
	I0604 23:18:39.475816    6196 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0604 23:18:39.719414    6196 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0604 23:18:39.956807    6196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-022000-m02 minikube.k8s.io/updated_at=2024_06_04T23_18_39_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=901ac483c3e1097c63cda7493d918b612a8127f5 minikube.k8s.io/name=multinode-022000 minikube.k8s.io/primary=false
	I0604 23:18:40.092988    6196 command_runner.go:130] > node/multinode-022000-m02 labeled
	I0604 23:18:40.095529    6196 start.go:318] duration metric: took 8.1851708s to joinCluster
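The block above records the full worker-join sequence: minting a join token on the control plane, running kubeadm join on the new node, then enabling kubelet and labelling the node. A minimal sketch of that same sequence, with the node name left as a placeholder rather than a value taken from this run:

    # on the control plane: print a join command containing a fresh token and CA cert hash
    JOIN_CMD=$(sudo kubeadm token create --print-join-command --ttl=0)
    # copy JOIN_CMD to the worker, then on the worker:
    sudo $JOIN_CMD --ignore-preflight-errors=all \
      --cri-socket unix:///var/run/cri-dockerd.sock --node-name=<worker-name>
    sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet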
	I0604 23:18:40.095670    6196 start.go:234] Will wait 6m0s for node &{Name:m02 IP:172.20.130.221 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0604 23:18:40.098632    6196 out.go:177] * Verifying Kubernetes components...
	I0604 23:18:40.095944    6196 config.go:182] Loaded profile config "multinode-022000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0604 23:18:40.115605    6196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0604 23:18:40.394248    6196 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0604 23:18:40.423538    6196 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0604 23:18:40.423709    6196 kapi.go:59] client config for multinode-022000: &rest.Config{Host:"https://172.20.128.97:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-022000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-022000\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAD
ata:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x240e1a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0604 23:18:40.425022    6196 node_ready.go:35] waiting up to 6m0s for node "multinode-022000-m02" to be "Ready" ...
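The polling below issues raw GETs against /api/v1/nodes/<node> and inspects the node's Ready condition. A minimal kubectl equivalent of that wait (not what the test harness itself invokes):

    kubectl wait --for=condition=Ready node/multinode-022000-m02 --timeout=6m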
	I0604 23:18:40.425022    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000-m02
	I0604 23:18:40.425022    6196 round_trippers.go:469] Request Headers:
	I0604 23:18:40.425022    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:18:40.425022    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:18:40.440074    6196 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0604 23:18:40.440112    6196 round_trippers.go:577] Response Headers:
	I0604 23:18:40.440150    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:18:40.440150    6196 round_trippers.go:580]     Content-Length: 4030
	I0604 23:18:40.440150    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:18:40 GMT
	I0604 23:18:40.440150    6196 round_trippers.go:580]     Audit-Id: 5100e006-4aa5-496c-9351-ca800abc3e02
	I0604 23:18:40.440150    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:18:40.440200    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:18:40.440200    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:18:40.440231    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000-m02","uid":"ed02d073-d60e-474d-b9da-ed98f708eeee","resourceVersion":"598","creationTimestamp":"2024-06-04T23:18:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_04T23_18_39_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-04T23:18:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3006 chars]
	I0604 23:18:40.928672    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000-m02
	I0604 23:18:40.928672    6196 round_trippers.go:469] Request Headers:
	I0604 23:18:40.928672    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:18:40.928672    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:18:40.932487    6196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0604 23:18:40.932487    6196 round_trippers.go:577] Response Headers:
	I0604 23:18:40.932487    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:18:40.932487    6196 round_trippers.go:580]     Content-Length: 4030
	I0604 23:18:40.933394    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:18:40 GMT
	I0604 23:18:40.933394    6196 round_trippers.go:580]     Audit-Id: 06726864-d3c1-417b-bf85-631bfe75e809
	I0604 23:18:40.933394    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:18:40.933394    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:18:40.933437    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:18:40.933509    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000-m02","uid":"ed02d073-d60e-474d-b9da-ed98f708eeee","resourceVersion":"598","creationTimestamp":"2024-06-04T23:18:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_04T23_18_39_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-04T23:18:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3006 chars]
	I0604 23:18:41.428300    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000-m02
	I0604 23:18:41.428523    6196 round_trippers.go:469] Request Headers:
	I0604 23:18:41.428523    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:18:41.428523    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:18:41.432963    6196 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0604 23:18:41.432963    6196 round_trippers.go:577] Response Headers:
	I0604 23:18:41.432963    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:18:41 GMT
	I0604 23:18:41.432963    6196 round_trippers.go:580]     Audit-Id: ae86d989-62ae-4307-9b57-9d2113e4ced5
	I0604 23:18:41.432963    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:18:41.432963    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:18:41.432963    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:18:41.432963    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:18:41.432963    6196 round_trippers.go:580]     Content-Length: 4030
	I0604 23:18:41.433795    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000-m02","uid":"ed02d073-d60e-474d-b9da-ed98f708eeee","resourceVersion":"598","creationTimestamp":"2024-06-04T23:18:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_04T23_18_39_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-04T23:18:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3006 chars]
	I0604 23:18:41.931415    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000-m02
	I0604 23:18:41.931479    6196 round_trippers.go:469] Request Headers:
	I0604 23:18:41.931479    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:18:41.931479    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:18:41.937113    6196 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0604 23:18:41.937113    6196 round_trippers.go:577] Response Headers:
	I0604 23:18:41.937113    6196 round_trippers.go:580]     Audit-Id: 03c7fee7-a73c-4d45-9c2d-6742e4e7cd20
	I0604 23:18:41.937464    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:18:41.937464    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:18:41.937464    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:18:41.937464    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:18:41.937464    6196 round_trippers.go:580]     Content-Length: 4030
	I0604 23:18:41.937464    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:18:41 GMT
	I0604 23:18:41.937646    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000-m02","uid":"ed02d073-d60e-474d-b9da-ed98f708eeee","resourceVersion":"598","creationTimestamp":"2024-06-04T23:18:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_04T23_18_39_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-04T23:18:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3006 chars]
	I0604 23:18:42.430803    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000-m02
	I0604 23:18:42.430803    6196 round_trippers.go:469] Request Headers:
	I0604 23:18:42.430803    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:18:42.430803    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:18:42.486727    6196 round_trippers.go:574] Response Status: 200 OK in 55 milliseconds
	I0604 23:18:42.487522    6196 round_trippers.go:577] Response Headers:
	I0604 23:18:42.487522    6196 round_trippers.go:580]     Audit-Id: 76bfa14b-bdac-4ff8-91c6-c8b70b936a0e
	I0604 23:18:42.487522    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:18:42.487522    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:18:42.487522    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:18:42.487607    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:18:42.487607    6196 round_trippers.go:580]     Content-Length: 4030
	I0604 23:18:42.487689    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:18:42 GMT
	I0604 23:18:42.487901    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000-m02","uid":"ed02d073-d60e-474d-b9da-ed98f708eeee","resourceVersion":"598","creationTimestamp":"2024-06-04T23:18:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_04T23_18_39_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-04T23:18:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3006 chars]
	I0604 23:18:42.488039    6196 node_ready.go:53] node "multinode-022000-m02" has status "Ready":"False"
	I0604 23:18:42.930786    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000-m02
	I0604 23:18:42.930786    6196 round_trippers.go:469] Request Headers:
	I0604 23:18:42.930786    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:18:42.930786    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:18:42.943848    6196 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0604 23:18:42.944125    6196 round_trippers.go:577] Response Headers:
	I0604 23:18:42.944125    6196 round_trippers.go:580]     Content-Length: 4030
	I0604 23:18:42.944125    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:18:42 GMT
	I0604 23:18:42.944125    6196 round_trippers.go:580]     Audit-Id: bf4e37ca-52b7-4e86-8b2f-272497233ed5
	I0604 23:18:42.944125    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:18:42.944125    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:18:42.944125    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:18:42.944125    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:18:42.944338    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000-m02","uid":"ed02d073-d60e-474d-b9da-ed98f708eeee","resourceVersion":"598","creationTimestamp":"2024-06-04T23:18:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_04T23_18_39_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-04T23:18:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3006 chars]
	I0604 23:18:43.434470    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000-m02
	I0604 23:18:43.434682    6196 round_trippers.go:469] Request Headers:
	I0604 23:18:43.434682    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:18:43.434682    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:18:43.439239    6196 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0604 23:18:43.439462    6196 round_trippers.go:577] Response Headers:
	I0604 23:18:43.439462    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:18:43.439462    6196 round_trippers.go:580]     Content-Length: 4030
	I0604 23:18:43.439462    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:18:43 GMT
	I0604 23:18:43.439462    6196 round_trippers.go:580]     Audit-Id: 395a0df6-fa51-4daf-908a-77466b46fd37
	I0604 23:18:43.439462    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:18:43.439462    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:18:43.439462    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:18:43.439676    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000-m02","uid":"ed02d073-d60e-474d-b9da-ed98f708eeee","resourceVersion":"598","creationTimestamp":"2024-06-04T23:18:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_04T23_18_39_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-04T23:18:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3006 chars]
	I0604 23:18:43.937069    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000-m02
	I0604 23:18:43.937365    6196 round_trippers.go:469] Request Headers:
	I0604 23:18:43.937365    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:18:43.937365    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:18:43.941933    6196 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0604 23:18:43.941933    6196 round_trippers.go:577] Response Headers:
	I0604 23:18:43.941933    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:18:43.942016    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:18:43.942016    6196 round_trippers.go:580]     Content-Length: 4030
	I0604 23:18:43.942016    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:18:43 GMT
	I0604 23:18:43.942016    6196 round_trippers.go:580]     Audit-Id: 724f516f-22d8-404e-bc38-5ffb98dcded8
	I0604 23:18:43.942016    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:18:43.942016    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:18:43.942097    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000-m02","uid":"ed02d073-d60e-474d-b9da-ed98f708eeee","resourceVersion":"598","creationTimestamp":"2024-06-04T23:18:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_04T23_18_39_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-04T23:18:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3006 chars]
	I0604 23:18:44.440293    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000-m02
	I0604 23:18:44.440293    6196 round_trippers.go:469] Request Headers:
	I0604 23:18:44.440293    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:18:44.440293    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:18:44.445992    6196 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0604 23:18:44.445992    6196 round_trippers.go:577] Response Headers:
	I0604 23:18:44.445992    6196 round_trippers.go:580]     Audit-Id: c1abadcd-08f4-4b93-9b3b-4fb221f1e69e
	I0604 23:18:44.445992    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:18:44.445992    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:18:44.445992    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:18:44.445992    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:18:44.445992    6196 round_trippers.go:580]     Content-Length: 4030
	I0604 23:18:44.445992    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:18:44 GMT
	I0604 23:18:44.445992    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000-m02","uid":"ed02d073-d60e-474d-b9da-ed98f708eeee","resourceVersion":"598","creationTimestamp":"2024-06-04T23:18:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_04T23_18_39_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-04T23:18:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3006 chars]
	I0604 23:18:44.928452    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000-m02
	I0604 23:18:44.928452    6196 round_trippers.go:469] Request Headers:
	I0604 23:18:44.928452    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:18:44.928539    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:18:44.932194    6196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0604 23:18:44.932383    6196 round_trippers.go:577] Response Headers:
	I0604 23:18:44.932383    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:18:44.932383    6196 round_trippers.go:580]     Content-Length: 4030
	I0604 23:18:44.932383    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:18:44 GMT
	I0604 23:18:44.932383    6196 round_trippers.go:580]     Audit-Id: c5953675-a4ae-4a93-9ae7-52e08701c42d
	I0604 23:18:44.932383    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:18:44.932383    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:18:44.932383    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:18:44.932610    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000-m02","uid":"ed02d073-d60e-474d-b9da-ed98f708eeee","resourceVersion":"598","creationTimestamp":"2024-06-04T23:18:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_04T23_18_39_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-04T23:18:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3006 chars]
	I0604 23:18:44.932610    6196 node_ready.go:53] node "multinode-022000-m02" has status "Ready":"False"
	I0604 23:18:45.433521    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000-m02
	I0604 23:18:45.433588    6196 round_trippers.go:469] Request Headers:
	I0604 23:18:45.433588    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:18:45.433588    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:18:45.437234    6196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0604 23:18:45.437915    6196 round_trippers.go:577] Response Headers:
	I0604 23:18:45.437915    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:18:45.437915    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:18:45.437915    6196 round_trippers.go:580]     Content-Length: 4030
	I0604 23:18:45.437915    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:18:45 GMT
	I0604 23:18:45.437915    6196 round_trippers.go:580]     Audit-Id: f67d6c88-ca8a-4e9e-a575-64651d85c1df
	I0604 23:18:45.437915    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:18:45.437915    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:18:45.438123    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000-m02","uid":"ed02d073-d60e-474d-b9da-ed98f708eeee","resourceVersion":"598","creationTimestamp":"2024-06-04T23:18:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_04T23_18_39_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-04T23:18:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3006 chars]
	I0604 23:18:45.940249    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000-m02
	I0604 23:18:45.940347    6196 round_trippers.go:469] Request Headers:
	I0604 23:18:45.940347    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:18:45.940379    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:18:45.945137    6196 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0604 23:18:45.953475    6196 round_trippers.go:577] Response Headers:
	I0604 23:18:45.953475    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:18:45.953475    6196 round_trippers.go:580]     Content-Length: 4030
	I0604 23:18:45.953475    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:18:45 GMT
	I0604 23:18:45.953475    6196 round_trippers.go:580]     Audit-Id: ca29d6e6-f2a5-4a53-ae74-f7d7b7363d79
	I0604 23:18:45.953475    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:18:45.953475    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:18:45.953475    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:18:45.953475    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000-m02","uid":"ed02d073-d60e-474d-b9da-ed98f708eeee","resourceVersion":"598","creationTimestamp":"2024-06-04T23:18:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_04T23_18_39_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-04T23:18:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3006 chars]
	I0604 23:18:46.436131    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000-m02
	I0604 23:18:46.436131    6196 round_trippers.go:469] Request Headers:
	I0604 23:18:46.436131    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:18:46.436131    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:18:46.443161    6196 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0604 23:18:46.443161    6196 round_trippers.go:577] Response Headers:
	I0604 23:18:46.443161    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:18:46.443161    6196 round_trippers.go:580]     Content-Length: 4030
	I0604 23:18:46.443161    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:18:46 GMT
	I0604 23:18:46.443161    6196 round_trippers.go:580]     Audit-Id: 9542c593-372e-49e4-85ea-fab9b7009141
	I0604 23:18:46.443161    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:18:46.443161    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:18:46.443161    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:18:46.443161    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000-m02","uid":"ed02d073-d60e-474d-b9da-ed98f708eeee","resourceVersion":"598","creationTimestamp":"2024-06-04T23:18:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_04T23_18_39_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-04T23:18:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3006 chars]
	I0604 23:18:46.928195    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000-m02
	I0604 23:18:46.928298    6196 round_trippers.go:469] Request Headers:
	I0604 23:18:46.928298    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:18:46.928368    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:18:46.932162    6196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0604 23:18:46.932162    6196 round_trippers.go:577] Response Headers:
	I0604 23:18:46.932162    6196 round_trippers.go:580]     Content-Length: 4030
	I0604 23:18:46.932162    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:18:46 GMT
	I0604 23:18:46.932162    6196 round_trippers.go:580]     Audit-Id: e9c5a291-4271-44ce-8bf9-4a3c86805a39
	I0604 23:18:46.932162    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:18:46.932962    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:18:46.932962    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:18:46.932962    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:18:46.933008    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000-m02","uid":"ed02d073-d60e-474d-b9da-ed98f708eeee","resourceVersion":"598","creationTimestamp":"2024-06-04T23:18:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_04T23_18_39_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-04T23:18:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3006 chars]
	I0604 23:18:46.933008    6196 node_ready.go:53] node "multinode-022000-m02" has status "Ready":"False"
	I0604 23:18:47.432705    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000-m02
	I0604 23:18:47.432760    6196 round_trippers.go:469] Request Headers:
	I0604 23:18:47.432760    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:18:47.432760    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:18:47.436553    6196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0604 23:18:47.436553    6196 round_trippers.go:577] Response Headers:
	I0604 23:18:47.436553    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:18:47.436892    6196 round_trippers.go:580]     Content-Length: 4030
	I0604 23:18:47.436892    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:18:47 GMT
	I0604 23:18:47.436892    6196 round_trippers.go:580]     Audit-Id: 024f13e1-cfbc-420c-93af-a4f8e36eb762
	I0604 23:18:47.436927    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:18:47.436927    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:18:47.436927    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:18:47.437041    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000-m02","uid":"ed02d073-d60e-474d-b9da-ed98f708eeee","resourceVersion":"598","creationTimestamp":"2024-06-04T23:18:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_04T23_18_39_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-04T23:18:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3006 chars]
	I0604 23:18:47.941440    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000-m02
	I0604 23:18:47.941440    6196 round_trippers.go:469] Request Headers:
	I0604 23:18:47.941510    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:18:47.941510    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:18:47.945940    6196 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0604 23:18:47.945940    6196 round_trippers.go:577] Response Headers:
	I0604 23:18:47.945940    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:18:47.945940    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:18:47.945940    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:18:47.945940    6196 round_trippers.go:580]     Content-Length: 4030
	I0604 23:18:47.945940    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:18:47 GMT
	I0604 23:18:47.945940    6196 round_trippers.go:580]     Audit-Id: d4b714e6-c599-4ecf-87b9-d1ebb32e512c
	I0604 23:18:47.945940    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:18:47.945940    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000-m02","uid":"ed02d073-d60e-474d-b9da-ed98f708eeee","resourceVersion":"598","creationTimestamp":"2024-06-04T23:18:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_04T23_18_39_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-04T23:18:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3006 chars]
	I0604 23:18:48.428648    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000-m02
	I0604 23:18:48.428815    6196 round_trippers.go:469] Request Headers:
	I0604 23:18:48.428815    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:18:48.428815    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:18:48.436692    6196 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0604 23:18:48.436692    6196 round_trippers.go:577] Response Headers:
	I0604 23:18:48.436692    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:18:48 GMT
	I0604 23:18:48.436692    6196 round_trippers.go:580]     Audit-Id: 3cb53ff6-a8e7-4b65-90af-df1340304e05
	I0604 23:18:48.436692    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:18:48.436692    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:18:48.436692    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:18:48.436692    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:18:48.436692    6196 round_trippers.go:580]     Content-Length: 4030
	I0604 23:18:48.436692    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000-m02","uid":"ed02d073-d60e-474d-b9da-ed98f708eeee","resourceVersion":"598","creationTimestamp":"2024-06-04T23:18:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_04T23_18_39_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-04T23:18:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3006 chars]
	I0604 23:18:48.932583    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000-m02
	I0604 23:18:48.932658    6196 round_trippers.go:469] Request Headers:
	I0604 23:18:48.932658    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:18:48.932658    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:18:49.049523    6196 round_trippers.go:574] Response Status: 200 OK in 116 milliseconds
	I0604 23:18:49.050025    6196 round_trippers.go:577] Response Headers:
	I0604 23:18:49.050025    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:18:49.050025    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:18:49.050025    6196 round_trippers.go:580]     Content-Length: 4030
	I0604 23:18:49.050025    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:18:49 GMT
	I0604 23:18:49.050025    6196 round_trippers.go:580]     Audit-Id: 677ac292-f44c-4be8-85fd-f4826a28e42d
	I0604 23:18:49.050025    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:18:49.050025    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:18:49.050328    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000-m02","uid":"ed02d073-d60e-474d-b9da-ed98f708eeee","resourceVersion":"598","creationTimestamp":"2024-06-04T23:18:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_04T23_18_39_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-04T23:18:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3006 chars]
	I0604 23:18:49.050609    6196 node_ready.go:53] node "multinode-022000-m02" has status "Ready":"False"
	I0604 23:18:49.436103    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000-m02
	I0604 23:18:49.436103    6196 round_trippers.go:469] Request Headers:
	I0604 23:18:49.436103    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:18:49.436103    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:18:49.442870    6196 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0604 23:18:49.442870    6196 round_trippers.go:577] Response Headers:
	I0604 23:18:49.442870    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:18:49.442870    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:18:49.442870    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:18:49.442870    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:18:49.442870    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:18:49 GMT
	I0604 23:18:49.442870    6196 round_trippers.go:580]     Audit-Id: 6769e4ff-6509-4644-8993-456713c265a1
	I0604 23:18:49.442870    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000-m02","uid":"ed02d073-d60e-474d-b9da-ed98f708eeee","resourceVersion":"611","creationTimestamp":"2024-06-04T23:18:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_04T23_18_39_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-04T23:18:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0604 23:18:49.929635    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000-m02
	I0604 23:18:49.929635    6196 round_trippers.go:469] Request Headers:
	I0604 23:18:49.929699    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:18:49.929699    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:18:49.934199    6196 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0604 23:18:49.934199    6196 round_trippers.go:577] Response Headers:
	I0604 23:18:49.934199    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:18:49.934199    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:18:49.934199    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:18:49.934413    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:18:49 GMT
	I0604 23:18:49.934413    6196 round_trippers.go:580]     Audit-Id: 8cbd9b58-8020-4cd0-90a3-66a819e361d1
	I0604 23:18:49.934413    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:18:49.934598    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000-m02","uid":"ed02d073-d60e-474d-b9da-ed98f708eeee","resourceVersion":"611","creationTimestamp":"2024-06-04T23:18:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_04T23_18_39_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-04T23:18:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0604 23:18:50.434871    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000-m02
	I0604 23:18:50.434985    6196 round_trippers.go:469] Request Headers:
	I0604 23:18:50.434985    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:18:50.434985    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:18:50.438923    6196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0604 23:18:50.438923    6196 round_trippers.go:577] Response Headers:
	I0604 23:18:50.438923    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:18:50.438923    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:18:50.438923    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:18:50 GMT
	I0604 23:18:50.438923    6196 round_trippers.go:580]     Audit-Id: c32efb0a-2528-486b-ae05-1ecd8093e86b
	I0604 23:18:50.439251    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:18:50.439251    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:18:50.439635    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000-m02","uid":"ed02d073-d60e-474d-b9da-ed98f708eeee","resourceVersion":"611","creationTimestamp":"2024-06-04T23:18:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_04T23_18_39_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-04T23:18:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0604 23:18:50.926433    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000-m02
	I0604 23:18:50.926433    6196 round_trippers.go:469] Request Headers:
	I0604 23:18:50.926433    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:18:50.926433    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:18:50.932561    6196 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0604 23:18:50.932785    6196 round_trippers.go:577] Response Headers:
	I0604 23:18:50.932785    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:18:50.932785    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:18:50.932785    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:18:50 GMT
	I0604 23:18:50.932785    6196 round_trippers.go:580]     Audit-Id: ecd6171d-f30e-46ca-b8a0-8c5e8afad934
	I0604 23:18:50.932785    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:18:50.932785    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:18:50.933214    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000-m02","uid":"ed02d073-d60e-474d-b9da-ed98f708eeee","resourceVersion":"611","creationTimestamp":"2024-06-04T23:18:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_04T23_18_39_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-04T23:18:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0604 23:18:51.431733    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000-m02
	I0604 23:18:51.431733    6196 round_trippers.go:469] Request Headers:
	I0604 23:18:51.431733    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:18:51.432046    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:18:51.435366    6196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0604 23:18:51.435366    6196 round_trippers.go:577] Response Headers:
	I0604 23:18:51.436347    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:18:51.436373    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:18:51 GMT
	I0604 23:18:51.436373    6196 round_trippers.go:580]     Audit-Id: a020ffc4-e555-4f3d-82c0-4dd9ade9fd89
	I0604 23:18:51.436373    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:18:51.436373    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:18:51.436373    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:18:51.436737    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000-m02","uid":"ed02d073-d60e-474d-b9da-ed98f708eeee","resourceVersion":"611","creationTimestamp":"2024-06-04T23:18:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_04T23_18_39_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-04T23:18:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0604 23:18:51.437167    6196 node_ready.go:53] node "multinode-022000-m02" has status "Ready":"False"
	I0604 23:18:51.939777    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000-m02
	I0604 23:18:51.939777    6196 round_trippers.go:469] Request Headers:
	I0604 23:18:51.939777    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:18:51.939777    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:18:51.942952    6196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0604 23:18:51.943946    6196 round_trippers.go:577] Response Headers:
	I0604 23:18:51.943946    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:18:51.943946    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:18:51.943946    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:18:51.943946    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:18:51 GMT
	I0604 23:18:51.944059    6196 round_trippers.go:580]     Audit-Id: 37a04768-8e8c-49b9-8408-c43a74c9f12a
	I0604 23:18:51.944059    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:18:51.944361    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000-m02","uid":"ed02d073-d60e-474d-b9da-ed98f708eeee","resourceVersion":"611","creationTimestamp":"2024-06-04T23:18:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_04T23_18_39_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-04T23:18:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0604 23:18:52.437616    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000-m02
	I0604 23:18:52.437826    6196 round_trippers.go:469] Request Headers:
	I0604 23:18:52.437826    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:18:52.437826    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:18:52.441618    6196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0604 23:18:52.441618    6196 round_trippers.go:577] Response Headers:
	I0604 23:18:52.441618    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:18:52 GMT
	I0604 23:18:52.441618    6196 round_trippers.go:580]     Audit-Id: 9cc2fc38-7f00-488e-925a-a29cc361de72
	I0604 23:18:52.441618    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:18:52.441618    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:18:52.441618    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:18:52.441618    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:18:52.441618    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000-m02","uid":"ed02d073-d60e-474d-b9da-ed98f708eeee","resourceVersion":"611","creationTimestamp":"2024-06-04T23:18:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_04T23_18_39_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-04T23:18:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0604 23:18:52.926617    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000-m02
	I0604 23:18:52.926617    6196 round_trippers.go:469] Request Headers:
	I0604 23:18:52.926778    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:18:52.926778    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:18:52.931194    6196 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0604 23:18:52.931194    6196 round_trippers.go:577] Response Headers:
	I0604 23:18:52.931194    6196 round_trippers.go:580]     Audit-Id: eaee1025-11b3-47f9-b882-1524ea05d59c
	I0604 23:18:52.931194    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:18:52.931194    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:18:52.931194    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:18:52.931194    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:18:52.931194    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:18:52 GMT
	I0604 23:18:52.931818    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000-m02","uid":"ed02d073-d60e-474d-b9da-ed98f708eeee","resourceVersion":"611","creationTimestamp":"2024-06-04T23:18:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_04T23_18_39_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-04T23:18:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0604 23:18:53.432137    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000-m02
	I0604 23:18:53.432174    6196 round_trippers.go:469] Request Headers:
	I0604 23:18:53.432235    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:18:53.432235    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:18:53.444948    6196 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0604 23:18:53.444948    6196 round_trippers.go:577] Response Headers:
	I0604 23:18:53.444948    6196 round_trippers.go:580]     Audit-Id: 09e4440d-685d-48b7-a03e-45ac64e6840d
	I0604 23:18:53.444948    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:18:53.444948    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:18:53.444948    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:18:53.444948    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:18:53.444948    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:18:53 GMT
	I0604 23:18:53.444948    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000-m02","uid":"ed02d073-d60e-474d-b9da-ed98f708eeee","resourceVersion":"611","creationTimestamp":"2024-06-04T23:18:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_04T23_18_39_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-04T23:18:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0604 23:18:53.445837    6196 node_ready.go:53] node "multinode-022000-m02" has status "Ready":"False"
	I0604 23:18:53.940177    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000-m02
	I0604 23:18:53.940177    6196 round_trippers.go:469] Request Headers:
	I0604 23:18:53.940289    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:18:53.940289    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:18:53.946847    6196 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0604 23:18:53.946898    6196 round_trippers.go:577] Response Headers:
	I0604 23:18:53.946898    6196 round_trippers.go:580]     Audit-Id: 9c20d688-f6c4-401c-b82b-02f47717309c
	I0604 23:18:53.946898    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:18:53.946898    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:18:53.946898    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:18:53.946898    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:18:53.946898    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:18:53 GMT
	I0604 23:18:53.947075    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000-m02","uid":"ed02d073-d60e-474d-b9da-ed98f708eeee","resourceVersion":"611","creationTimestamp":"2024-06-04T23:18:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_04T23_18_39_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-04T23:18:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0604 23:18:54.425608    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000-m02
	I0604 23:18:54.425802    6196 round_trippers.go:469] Request Headers:
	I0604 23:18:54.425802    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:18:54.425802    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:18:54.591267    6196 round_trippers.go:574] Response Status: 200 OK in 165 milliseconds
	I0604 23:18:54.591910    6196 round_trippers.go:577] Response Headers:
	I0604 23:18:54.591910    6196 round_trippers.go:580]     Audit-Id: 5f5ecd85-71e6-445c-bc74-839937c8c29d
	I0604 23:18:54.591910    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:18:54.591910    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:18:54.591910    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:18:54.591910    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:18:54.591910    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:18:54 GMT
	I0604 23:18:54.592190    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000-m02","uid":"ed02d073-d60e-474d-b9da-ed98f708eeee","resourceVersion":"611","creationTimestamp":"2024-06-04T23:18:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_04T23_18_39_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-04T23:18:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0604 23:18:54.938097    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000-m02
	I0604 23:18:54.938339    6196 round_trippers.go:469] Request Headers:
	I0604 23:18:54.938339    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:18:54.938339    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:18:54.942145    6196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0604 23:18:54.943103    6196 round_trippers.go:577] Response Headers:
	I0604 23:18:54.943144    6196 round_trippers.go:580]     Audit-Id: 0fbc75d3-3b56-4b05-ac9e-092fb80fd764
	I0604 23:18:54.943144    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:18:54.943144    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:18:54.943144    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:18:54.943144    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:18:54.943144    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:18:54 GMT
	I0604 23:18:54.943439    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000-m02","uid":"ed02d073-d60e-474d-b9da-ed98f708eeee","resourceVersion":"611","creationTimestamp":"2024-06-04T23:18:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_04T23_18_39_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-04T23:18:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0604 23:18:55.440130    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000-m02
	I0604 23:18:55.440257    6196 round_trippers.go:469] Request Headers:
	I0604 23:18:55.440325    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:18:55.440325    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:18:55.443789    6196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0604 23:18:55.443789    6196 round_trippers.go:577] Response Headers:
	I0604 23:18:55.443789    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:18:55 GMT
	I0604 23:18:55.444795    6196 round_trippers.go:580]     Audit-Id: b819d914-46b5-4997-b57a-b416699df946
	I0604 23:18:55.444795    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:18:55.444795    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:18:55.444795    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:18:55.444836    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:18:55.445214    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000-m02","uid":"ed02d073-d60e-474d-b9da-ed98f708eeee","resourceVersion":"611","creationTimestamp":"2024-06-04T23:18:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_04T23_18_39_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-04T23:18:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0604 23:18:55.940259    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000-m02
	I0604 23:18:55.940489    6196 round_trippers.go:469] Request Headers:
	I0604 23:18:55.940489    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:18:55.940579    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:18:55.943889    6196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0604 23:18:55.949610    6196 round_trippers.go:577] Response Headers:
	I0604 23:18:55.949610    6196 round_trippers.go:580]     Audit-Id: 3c5f496c-89a0-4eac-b8fe-6a7658d50b11
	I0604 23:18:55.949610    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:18:55.949610    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:18:55.949610    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:18:55.949610    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:18:55.949610    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:18:55 GMT
	I0604 23:18:55.950651    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000-m02","uid":"ed02d073-d60e-474d-b9da-ed98f708eeee","resourceVersion":"611","creationTimestamp":"2024-06-04T23:18:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_04T23_18_39_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-04T23:18:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0604 23:18:55.950651    6196 node_ready.go:53] node "multinode-022000-m02" has status "Ready":"False"
	I0604 23:18:56.440682    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000-m02
	I0604 23:18:56.440682    6196 round_trippers.go:469] Request Headers:
	I0604 23:18:56.440745    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:18:56.440745    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:18:56.453178    6196 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0604 23:18:56.453178    6196 round_trippers.go:577] Response Headers:
	I0604 23:18:56.453178    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:18:56.453178    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:18:56 GMT
	I0604 23:18:56.453178    6196 round_trippers.go:580]     Audit-Id: d0e195d2-de75-4e31-87da-b96f29b0ce2e
	I0604 23:18:56.453178    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:18:56.453178    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:18:56.453178    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:18:56.453178    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000-m02","uid":"ed02d073-d60e-474d-b9da-ed98f708eeee","resourceVersion":"627","creationTimestamp":"2024-06-04T23:18:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_04T23_18_39_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-04T23:18:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3264 chars]
	I0604 23:18:56.454177    6196 node_ready.go:49] node "multinode-022000-m02" has status "Ready":"True"
	I0604 23:18:56.454177    6196 node_ready.go:38] duration metric: took 16.0290284s for node "multinode-022000-m02" to be "Ready" ...
	I0604 23:18:56.454177    6196 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0604 23:18:56.454177    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/namespaces/kube-system/pods
	I0604 23:18:56.454177    6196 round_trippers.go:469] Request Headers:
	I0604 23:18:56.454177    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:18:56.454177    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:18:56.460177    6196 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0604 23:18:56.460177    6196 round_trippers.go:577] Response Headers:
	I0604 23:18:56.460177    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:18:56.460177    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:18:56.460177    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:18:56 GMT
	I0604 23:18:56.460177    6196 round_trippers.go:580]     Audit-Id: f1ff896b-4076-4c99-8363-a0f085b11b3d
	I0604 23:18:56.460177    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:18:56.461109    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:18:56.462565    6196 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"629"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-mlh9s","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"15497b54-7964-47a8-9dc8-89c225f6b842","resourceVersion":"421","creationTimestamp":"2024-06-04T23:15:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"35e6f047-84cd-4ebd-aa42-f4810a209d30","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-04T23:15:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"35e6f047-84cd-4ebd-aa42-f4810a209d30\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 70438 chars]
	I0604 23:18:56.466327    6196 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-mlh9s" in "kube-system" namespace to be "Ready" ...
	I0604 23:18:56.467036    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mlh9s
	I0604 23:18:56.467036    6196 round_trippers.go:469] Request Headers:
	I0604 23:18:56.467036    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:18:56.467036    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:18:56.471615    6196 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0604 23:18:56.471758    6196 round_trippers.go:577] Response Headers:
	I0604 23:18:56.471758    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:18:56 GMT
	I0604 23:18:56.471758    6196 round_trippers.go:580]     Audit-Id: 2603eac6-f83f-406e-b52e-8eb2d57db2ef
	I0604 23:18:56.471850    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:18:56.471850    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:18:56.471850    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:18:56.471850    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:18:56.472040    6196 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-mlh9s","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"15497b54-7964-47a8-9dc8-89c225f6b842","resourceVersion":"421","creationTimestamp":"2024-06-04T23:15:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"35e6f047-84cd-4ebd-aa42-f4810a209d30","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-04T23:15:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"35e6f047-84cd-4ebd-aa42-f4810a209d30\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6578 chars]
	I0604 23:18:56.472173    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000
	I0604 23:18:56.472708    6196 round_trippers.go:469] Request Headers:
	I0604 23:18:56.472708    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:18:56.472708    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:18:56.479049    6196 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0604 23:18:56.479049    6196 round_trippers.go:577] Response Headers:
	I0604 23:18:56.479049    6196 round_trippers.go:580]     Audit-Id: 13f4aafc-3e98-4990-bcf1-bfab4a0a1cfc
	I0604 23:18:56.479212    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:18:56.479212    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:18:56.479212    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:18:56.479212    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:18:56.479212    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:18:56 GMT
	I0604 23:18:56.479409    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000","uid":"e6c7d6d5-31ef-4aae-82b1-0d1130b29243","resourceVersion":"427","creationTimestamp":"2024-06-04T23:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_04T23_15_12_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-04T23:15:07Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0604 23:18:56.479952    6196 pod_ready.go:92] pod "coredns-7db6d8ff4d-mlh9s" in "kube-system" namespace has status "Ready":"True"
	I0604 23:18:56.480014    6196 pod_ready.go:81] duration metric: took 13.1118ms for pod "coredns-7db6d8ff4d-mlh9s" in "kube-system" namespace to be "Ready" ...
	I0604 23:18:56.480014    6196 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-022000" in "kube-system" namespace to be "Ready" ...
	I0604 23:18:56.480137    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-022000
	I0604 23:18:56.480137    6196 round_trippers.go:469] Request Headers:
	I0604 23:18:56.480193    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:18:56.480193    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:18:56.482841    6196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0604 23:18:56.482841    6196 round_trippers.go:577] Response Headers:
	I0604 23:18:56.482841    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:18:56.482841    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:18:56.482841    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:18:56.482841    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:18:56 GMT
	I0604 23:18:56.482841    6196 round_trippers.go:580]     Audit-Id: 8bc91cd3-1781-485e-b29f-921562230dcc
	I0604 23:18:56.482841    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:18:56.483501    6196 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-022000","namespace":"kube-system","uid":"cf5ce7db-ab12-4be8-9e44-317caab1adeb","resourceVersion":"386","creationTimestamp":"2024-06-04T23:15:11Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.20.128.97:2379","kubernetes.io/config.hash":"062055fff54be1dfa52344fae14a29a3","kubernetes.io/config.mirror":"062055fff54be1dfa52344fae14a29a3","kubernetes.io/config.seen":"2024-06-04T23:15:11.311330236Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-022000","uid":"e6c7d6d5-31ef-4aae-82b1-0d1130b29243","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-04T23:15:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6159 chars]
	I0604 23:18:56.483501    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000
	I0604 23:18:56.483501    6196 round_trippers.go:469] Request Headers:
	I0604 23:18:56.483501    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:18:56.483501    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:18:56.486497    6196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0604 23:18:56.486497    6196 round_trippers.go:577] Response Headers:
	I0604 23:18:56.486497    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:18:56.486497    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:18:56.486497    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:18:56.486497    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:18:56 GMT
	I0604 23:18:56.486497    6196 round_trippers.go:580]     Audit-Id: 90a9a8f2-4794-4ccb-a03d-4aeabe98e4a4
	I0604 23:18:56.486497    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:18:56.486497    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000","uid":"e6c7d6d5-31ef-4aae-82b1-0d1130b29243","resourceVersion":"427","creationTimestamp":"2024-06-04T23:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_04T23_15_12_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-04T23:15:07Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0604 23:18:56.487484    6196 pod_ready.go:92] pod "etcd-multinode-022000" in "kube-system" namespace has status "Ready":"True"
	I0604 23:18:56.487484    6196 pod_ready.go:81] duration metric: took 7.4703ms for pod "etcd-multinode-022000" in "kube-system" namespace to be "Ready" ...
	I0604 23:18:56.487484    6196 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-022000" in "kube-system" namespace to be "Ready" ...
	I0604 23:18:56.487484    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-022000
	I0604 23:18:56.487484    6196 round_trippers.go:469] Request Headers:
	I0604 23:18:56.487484    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:18:56.487484    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:18:56.490594    6196 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0604 23:18:56.490620    6196 round_trippers.go:577] Response Headers:
	I0604 23:18:56.490620    6196 round_trippers.go:580]     Audit-Id: 8122a3e9-9a8b-4601-80ac-4dab24708f75
	I0604 23:18:56.490701    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:18:56.490701    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:18:56.490701    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:18:56.490701    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:18:56.490701    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:18:56 GMT
	I0604 23:18:56.491192    6196 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-022000","namespace":"kube-system","uid":"a15ca283-cf36-4ce5-846a-37257524e217","resourceVersion":"385","creationTimestamp":"2024-06-04T23:15:10Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.20.128.97:8443","kubernetes.io/config.hash":"9ba2e7a4236a9c9b06cf265710457805","kubernetes.io/config.mirror":"9ba2e7a4236a9c9b06cf265710457805","kubernetes.io/config.seen":"2024-06-04T23:15:02.371587958Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-022000","uid":"e6c7d6d5-31ef-4aae-82b1-0d1130b29243","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-04T23:15:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7694 chars]
	I0604 23:18:56.491497    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000
	I0604 23:18:56.491497    6196 round_trippers.go:469] Request Headers:
	I0604 23:18:56.491497    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:18:56.491497    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:18:56.499516    6196 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0604 23:18:56.499977    6196 round_trippers.go:577] Response Headers:
	I0604 23:18:56.499977    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:18:56.499977    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:18:56.499977    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:18:56.499977    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:18:56.499977    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:18:56 GMT
	I0604 23:18:56.500088    6196 round_trippers.go:580]     Audit-Id: 07252805-fec1-4049-b6fc-6f0779e2753b
	I0604 23:18:56.500286    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000","uid":"e6c7d6d5-31ef-4aae-82b1-0d1130b29243","resourceVersion":"427","creationTimestamp":"2024-06-04T23:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_04T23_15_12_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-04T23:15:07Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0604 23:18:56.500286    6196 pod_ready.go:92] pod "kube-apiserver-multinode-022000" in "kube-system" namespace has status "Ready":"True"
	I0604 23:18:56.500286    6196 pod_ready.go:81] duration metric: took 12.8015ms for pod "kube-apiserver-multinode-022000" in "kube-system" namespace to be "Ready" ...
	I0604 23:18:56.500286    6196 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-022000" in "kube-system" namespace to be "Ready" ...
	I0604 23:18:56.500286    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-022000
	I0604 23:18:56.500286    6196 round_trippers.go:469] Request Headers:
	I0604 23:18:56.500286    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:18:56.500286    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:18:56.516906    6196 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0604 23:18:56.516906    6196 round_trippers.go:577] Response Headers:
	I0604 23:18:56.516906    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:18:56 GMT
	I0604 23:18:56.516906    6196 round_trippers.go:580]     Audit-Id: 3729ee0a-14f8-4804-85ed-c0b86bf10d5a
	I0604 23:18:56.516906    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:18:56.516906    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:18:56.516906    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:18:56.516906    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:18:56.516906    6196 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-022000","namespace":"kube-system","uid":"2bb46405-19fa-4ca8-afd5-6d6224271444","resourceVersion":"382","creationTimestamp":"2024-06-04T23:15:11Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"84c4d645ecc2a919c3d46a8ee859a4e7","kubernetes.io/config.mirror":"84c4d645ecc2a919c3d46a8ee859a4e7","kubernetes.io/config.seen":"2024-06-04T23:15:11.311327436Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-022000","uid":"e6c7d6d5-31ef-4aae-82b1-0d1130b29243","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-04T23:15:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7264 chars]
	I0604 23:18:56.516906    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000
	I0604 23:18:56.516906    6196 round_trippers.go:469] Request Headers:
	I0604 23:18:56.516906    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:18:56.516906    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:18:56.524925    6196 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0604 23:18:56.524925    6196 round_trippers.go:577] Response Headers:
	I0604 23:18:56.524925    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:18:56.524925    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:18:56 GMT
	I0604 23:18:56.524925    6196 round_trippers.go:580]     Audit-Id: 6c9ec9d2-1dea-4346-984f-1ab0d7ab3638
	I0604 23:18:56.524925    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:18:56.524925    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:18:56.524925    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:18:56.525726    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000","uid":"e6c7d6d5-31ef-4aae-82b1-0d1130b29243","resourceVersion":"427","creationTimestamp":"2024-06-04T23:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_04T23_15_12_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-04T23:15:07Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0604 23:18:56.526327    6196 pod_ready.go:92] pod "kube-controller-manager-multinode-022000" in "kube-system" namespace has status "Ready":"True"
	I0604 23:18:56.526327    6196 pod_ready.go:81] duration metric: took 26.0408ms for pod "kube-controller-manager-multinode-022000" in "kube-system" namespace to be "Ready" ...
	I0604 23:18:56.526403    6196 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pbmpr" in "kube-system" namespace to be "Ready" ...
	I0604 23:18:56.650008    6196 request.go:629] Waited for 123.5448ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.128.97:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pbmpr
	I0604 23:18:56.650008    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pbmpr
	I0604 23:18:56.650008    6196 round_trippers.go:469] Request Headers:
	I0604 23:18:56.650008    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:18:56.650008    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:18:56.653364    6196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0604 23:18:56.653364    6196 round_trippers.go:577] Response Headers:
	I0604 23:18:56.653364    6196 round_trippers.go:580]     Audit-Id: c3c0e957-6964-48f9-b5f3-004c994db1ad
	I0604 23:18:56.653364    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:18:56.653364    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:18:56.653364    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:18:56.653364    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:18:56.653887    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:18:56 GMT
	I0604 23:18:56.653995    6196 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-pbmpr","generateName":"kube-proxy-","namespace":"kube-system","uid":"ab42abeb-7ba9-4571-8c49-7c7f1e4bb6be","resourceVersion":"378","creationTimestamp":"2024-06-04T23:15:24Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"65a2d176-a8e8-492d-972b-d687ffc57c3d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-04T23:15:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65a2d176-a8e8-492d-972b-d687ffc57c3d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5828 chars]
	I0604 23:18:56.852344    6196 request.go:629] Waited for 198.3476ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.128.97:8443/api/v1/nodes/multinode-022000
	I0604 23:18:56.852667    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000
	I0604 23:18:56.852726    6196 round_trippers.go:469] Request Headers:
	I0604 23:18:56.852726    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:18:56.852726    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:18:56.856115    6196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0604 23:18:56.856115    6196 round_trippers.go:577] Response Headers:
	I0604 23:18:56.856115    6196 round_trippers.go:580]     Audit-Id: 232800f6-927c-4c12-8811-7fa2efc7c85d
	I0604 23:18:56.856115    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:18:56.856115    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:18:56.856115    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:18:56.856115    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:18:56.856115    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:18:56 GMT
	I0604 23:18:56.857493    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000","uid":"e6c7d6d5-31ef-4aae-82b1-0d1130b29243","resourceVersion":"427","creationTimestamp":"2024-06-04T23:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_04T23_15_12_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-04T23:15:07Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0604 23:18:56.858252    6196 pod_ready.go:92] pod "kube-proxy-pbmpr" in "kube-system" namespace has status "Ready":"True"
	I0604 23:18:56.858252    6196 pod_ready.go:81] duration metric: took 331.8467ms for pod "kube-proxy-pbmpr" in "kube-system" namespace to be "Ready" ...
	I0604 23:18:56.858252    6196 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xb6b5" in "kube-system" namespace to be "Ready" ...
	I0604 23:18:57.040806    6196 request.go:629] Waited for 182.5528ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.128.97:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xb6b5
	I0604 23:18:57.040940    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xb6b5
	I0604 23:18:57.041192    6196 round_trippers.go:469] Request Headers:
	I0604 23:18:57.041192    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:18:57.041192    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:18:57.045371    6196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0604 23:18:57.045371    6196 round_trippers.go:577] Response Headers:
	I0604 23:18:57.045371    6196 round_trippers.go:580]     Audit-Id: ff909036-4d06-4c7b-bf1d-0cbe07a4c5c8
	I0604 23:18:57.045371    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:18:57.045371    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:18:57.045445    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:18:57.045445    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:18:57.045445    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:18:57 GMT
	I0604 23:18:57.045618    6196 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-xb6b5","generateName":"kube-proxy-","namespace":"kube-system","uid":"32c32f53-0cf7-4236-a187-8975de272f62","resourceVersion":"615","creationTimestamp":"2024-06-04T23:18:39Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"65a2d176-a8e8-492d-972b-d687ffc57c3d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-04T23:18:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65a2d176-a8e8-492d-972b-d687ffc57c3d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5841 chars]
	I0604 23:18:57.243192    6196 request.go:629] Waited for 196.427ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.128.97:8443/api/v1/nodes/multinode-022000-m02
	I0604 23:18:57.243315    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000-m02
	I0604 23:18:57.243315    6196 round_trippers.go:469] Request Headers:
	I0604 23:18:57.243315    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:18:57.243315    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:18:57.246909    6196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0604 23:18:57.246909    6196 round_trippers.go:577] Response Headers:
	I0604 23:18:57.246909    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:18:57.246909    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:18:57.247838    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:18:57.247838    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:18:57 GMT
	I0604 23:18:57.247838    6196 round_trippers.go:580]     Audit-Id: a85eee93-bdf0-4e91-840c-5aa626488f54
	I0604 23:18:57.247838    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:18:57.248120    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000-m02","uid":"ed02d073-d60e-474d-b9da-ed98f708eeee","resourceVersion":"627","creationTimestamp":"2024-06-04T23:18:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_04T23_18_39_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-04T23:18:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3264 chars]
	I0604 23:18:57.248308    6196 pod_ready.go:92] pod "kube-proxy-xb6b5" in "kube-system" namespace has status "Ready":"True"
	I0604 23:18:57.248308    6196 pod_ready.go:81] duration metric: took 390.0526ms for pod "kube-proxy-xb6b5" in "kube-system" namespace to be "Ready" ...
	I0604 23:18:57.248308    6196 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-022000" in "kube-system" namespace to be "Ready" ...
	I0604 23:18:57.446621    6196 request.go:629] Waited for 198.1022ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.128.97:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-022000
	I0604 23:18:57.446729    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-022000
	I0604 23:18:57.446729    6196 round_trippers.go:469] Request Headers:
	I0604 23:18:57.446913    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:18:57.446995    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:18:57.451599    6196 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0604 23:18:57.451810    6196 round_trippers.go:577] Response Headers:
	I0604 23:18:57.451810    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:18:57.451810    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:18:57.451810    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:18:57 GMT
	I0604 23:18:57.451810    6196 round_trippers.go:580]     Audit-Id: 91aa8dc1-280a-4c34-b7c1-43a6dd9aed33
	I0604 23:18:57.451810    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:18:57.451810    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:18:57.452219    6196 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-022000","namespace":"kube-system","uid":"0453fac4-fec2-4a1f-80f7-c3192dae4ea5","resourceVersion":"384","creationTimestamp":"2024-06-04T23:15:11Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7091343253039c34aad74dccf8d697b0","kubernetes.io/config.mirror":"7091343253039c34aad74dccf8d697b0","kubernetes.io/config.seen":"2024-06-04T23:15:11.311328836Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-022000","uid":"e6c7d6d5-31ef-4aae-82b1-0d1130b29243","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-04T23:15:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4994 chars]
	I0604 23:18:57.648851    6196 request.go:629] Waited for 195.8094ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.128.97:8443/api/v1/nodes/multinode-022000
	I0604 23:18:57.649151    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes/multinode-022000
	I0604 23:18:57.649151    6196 round_trippers.go:469] Request Headers:
	I0604 23:18:57.649151    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:18:57.649151    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:18:57.652538    6196 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0604 23:18:57.652538    6196 round_trippers.go:577] Response Headers:
	I0604 23:18:57.652538    6196 round_trippers.go:580]     Audit-Id: 889c65f7-0502-4d93-92ab-ac9c02921fda
	I0604 23:18:57.652538    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:18:57.653187    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:18:57.653187    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:18:57.653187    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:18:57.653187    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:18:57 GMT
	I0604 23:18:57.653379    6196 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022000","uid":"e6c7d6d5-31ef-4aae-82b1-0d1130b29243","resourceVersion":"427","creationTimestamp":"2024-06-04T23:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_04T23_15_12_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-04T23:15:07Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0604 23:18:57.654035    6196 pod_ready.go:92] pod "kube-scheduler-multinode-022000" in "kube-system" namespace has status "Ready":"True"
	I0604 23:18:57.654035    6196 pod_ready.go:81] duration metric: took 405.7242ms for pod "kube-scheduler-multinode-022000" in "kube-system" namespace to be "Ready" ...
	I0604 23:18:57.654188    6196 pod_ready.go:38] duration metric: took 1.200001s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0604 23:18:57.654275    6196 system_svc.go:44] waiting for kubelet service to be running ....
	I0604 23:18:57.666933    6196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0604 23:18:57.694284    6196 system_svc.go:56] duration metric: took 40.009ms WaitForService to wait for kubelet
	I0604 23:18:57.694284    6196 kubeadm.go:576] duration metric: took 17.5984755s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0604 23:18:57.694284    6196 node_conditions.go:102] verifying NodePressure condition ...
	I0604 23:18:57.850811    6196 request.go:629] Waited for 156.1825ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.128.97:8443/api/v1/nodes
	I0604 23:18:57.850811    6196 round_trippers.go:463] GET https://172.20.128.97:8443/api/v1/nodes
	I0604 23:18:57.850811    6196 round_trippers.go:469] Request Headers:
	I0604 23:18:57.850811    6196 round_trippers.go:473]     Accept: application/json, */*
	I0604 23:18:57.850811    6196 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0604 23:18:57.855922    6196 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0604 23:18:57.855922    6196 round_trippers.go:577] Response Headers:
	I0604 23:18:57.855922    6196 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cbb8964e-7f83-468c-b695-869c65382535
	I0604 23:18:57.855922    6196 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ad277f33-9e7c-4fe7-8b4b-e73eb37d6d2b
	I0604 23:18:57.855922    6196 round_trippers.go:580]     Date: Tue, 04 Jun 2024 23:18:57 GMT
	I0604 23:18:57.855922    6196 round_trippers.go:580]     Audit-Id: 751e6b09-6d7a-4d60-a391-71a8e93b1249
	I0604 23:18:57.855922    6196 round_trippers.go:580]     Cache-Control: no-cache, private
	I0604 23:18:57.855922    6196 round_trippers.go:580]     Content-Type: application/json
	I0604 23:18:57.855922    6196 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"631"},"items":[{"metadata":{"name":"multinode-022000","uid":"e6c7d6d5-31ef-4aae-82b1-0d1130b29243","resourceVersion":"427","creationTimestamp":"2024-06-04T23:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"901ac483c3e1097c63cda7493d918b612a8127f5","minikube.k8s.io/name":"multinode-022000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_04T23_15_12_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 9268 chars]
	I0604 23:18:57.857152    6196 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0604 23:18:57.857152    6196 node_conditions.go:123] node cpu capacity is 2
	I0604 23:18:57.857228    6196 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0604 23:18:57.857228    6196 node_conditions.go:123] node cpu capacity is 2
	I0604 23:18:57.857228    6196 node_conditions.go:105] duration metric: took 162.9424ms to run NodePressure ...
	I0604 23:18:57.857228    6196 start.go:240] waiting for startup goroutines ...
	I0604 23:18:57.857228    6196 start.go:254] writing updated cluster config ...
	I0604 23:18:57.869579    6196 ssh_runner.go:195] Run: rm -f paused
	I0604 23:18:58.022987    6196 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0604 23:18:58.028415    6196 out.go:177] * Done! kubectl is now configured to use "multinode-022000" cluster and "default" namespace by default
	
	
	==> Docker <==
	Jun 04 23:15:39 multinode-022000 dockerd[1336]: time="2024-06-04T23:15:39.842369051Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 04 23:15:39 multinode-022000 dockerd[1336]: time="2024-06-04T23:15:39.843831161Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 04 23:15:39 multinode-022000 dockerd[1336]: time="2024-06-04T23:15:39.843933861Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 04 23:15:39 multinode-022000 dockerd[1336]: time="2024-06-04T23:15:39.844216663Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 04 23:15:39 multinode-022000 dockerd[1336]: time="2024-06-04T23:15:39.965070683Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 04 23:15:39 multinode-022000 dockerd[1336]: time="2024-06-04T23:15:39.965906588Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 04 23:15:39 multinode-022000 dockerd[1336]: time="2024-06-04T23:15:39.966104390Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 04 23:15:39 multinode-022000 dockerd[1336]: time="2024-06-04T23:15:39.966757594Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 04 23:19:25 multinode-022000 dockerd[1336]: time="2024-06-04T23:19:25.937682296Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 04 23:19:25 multinode-022000 dockerd[1336]: time="2024-06-04T23:19:25.938159801Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 04 23:19:25 multinode-022000 dockerd[1336]: time="2024-06-04T23:19:25.938237701Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 04 23:19:25 multinode-022000 dockerd[1336]: time="2024-06-04T23:19:25.938740806Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 04 23:19:26 multinode-022000 cri-dockerd[1236]: time="2024-06-04T23:19:26Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/0954c9343f31c4946bfd429a1ad215a82da95e5ae2afdb88166571b5af0adf05/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jun 04 23:19:27 multinode-022000 cri-dockerd[1236]: time="2024-06-04T23:19:27Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Jun 04 23:19:27 multinode-022000 dockerd[1336]: time="2024-06-04T23:19:27.797212457Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 04 23:19:27 multinode-022000 dockerd[1336]: time="2024-06-04T23:19:27.798264368Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 04 23:19:27 multinode-022000 dockerd[1336]: time="2024-06-04T23:19:27.798390669Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 04 23:19:27 multinode-022000 dockerd[1336]: time="2024-06-04T23:19:27.798930874Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 04 23:20:18 multinode-022000 dockerd[1330]: 2024/06/04 23:20:18 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 04 23:20:19 multinode-022000 dockerd[1330]: 2024/06/04 23:20:19 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 04 23:20:19 multinode-022000 dockerd[1330]: 2024/06/04 23:20:19 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 04 23:20:19 multinode-022000 dockerd[1330]: 2024/06/04 23:20:19 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 04 23:20:19 multinode-022000 dockerd[1330]: 2024/06/04 23:20:19 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 04 23:20:19 multinode-022000 dockerd[1330]: 2024/06/04 23:20:19 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Jun 04 23:20:19 multinode-022000 dockerd[1330]: 2024/06/04 23:20:19 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	411a3919d8cac       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   18 minutes ago      Running             busybox                   0                   0954c9343f31c       busybox-fc5497c4f-8bcjx
	03f3b4de24580       cbb01a7bd410d                                                                                         21 minutes ago      Running             coredns                   0                   675bb5a4c04a1       coredns-7db6d8ff4d-mlh9s
	2dba3a07a5a2f       6e38f40d628db                                                                                         21 minutes ago      Running             storage-provisioner       0                   379b62cc1d5a7       storage-provisioner
	3df3de0da4c3c       kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8              21 minutes ago      Running             kindnet-cni               0                   4b233b76fa4c7       kindnet-s279j
	e160006e01953       747097150317f                                                                                         22 minutes ago      Running             kube-proxy                0                   1a26f50a38b56       kube-proxy-pbmpr
	e7a691c4c711b       a52dc94f0a912                                                                                         22 minutes ago      Running             kube-scheduler            0                   09d03e1ab4e31       kube-scheduler-multinode-022000
	6fa66b9502ad4       25a1387cdab82                                                                                         22 minutes ago      Running             kube-controller-manager   0                   3d4cf95b0d999       kube-controller-manager-multinode-022000
	05c914a510d03       91be940803172                                                                                         22 minutes ago      Running             kube-apiserver            0                   3f588a7c5b099       kube-apiserver-multinode-022000
	8b3adda489455       3861cfcd7c04c                                                                                         22 minutes ago      Running             etcd                      0                   51f3d3843b646       etcd-multinode-022000
	
	
	==> coredns [03f3b4de2458] <==
	[INFO] 10.244.0.3:35762 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000244603s
	[INFO] 10.244.1.2:41972 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000131101s
	[INFO] 10.244.1.2:44208 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000138602s
	[INFO] 10.244.1.2:41832 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000278203s
	[INFO] 10.244.1.2:41631 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000165302s
	[INFO] 10.244.1.2:59710 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000069001s
	[INFO] 10.244.1.2:43987 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0000622s
	[INFO] 10.244.1.2:49013 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000077701s
	[INFO] 10.244.1.2:60219 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000173902s
	[INFO] 10.244.0.3:35268 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000186701s
	[INFO] 10.244.0.3:45568 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0000608s
	[INFO] 10.244.0.3:41299 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000113002s
	[INFO] 10.244.0.3:59664 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000224903s
	[INFO] 10.244.1.2:59289 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000105601s
	[INFO] 10.244.1.2:38478 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000110401s
	[INFO] 10.244.1.2:39370 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000214302s
	[INFO] 10.244.1.2:39440 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000056401s
	[INFO] 10.244.0.3:38326 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000260803s
	[INFO] 10.244.0.3:37752 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000227802s
	[INFO] 10.244.0.3:59155 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000166002s
	[INFO] 10.244.0.3:34407 - 5 "PTR IN 1.128.20.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000186702s
	[INFO] 10.244.1.2:40850 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000151301s
	[INFO] 10.244.1.2:55108 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000114201s
	[INFO] 10.244.1.2:44936 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0000624s
	[INFO] 10.244.1.2:51542 - 5 "PTR IN 1.128.20.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000095901s
	
	
	==> describe nodes <==
	Name:               multinode-022000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-022000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=901ac483c3e1097c63cda7493d918b612a8127f5
	                    minikube.k8s.io/name=multinode-022000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_04T23_15_12_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 04 Jun 2024 23:15:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-022000
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 04 Jun 2024 23:37:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 04 Jun 2024 23:35:05 +0000   Tue, 04 Jun 2024 23:15:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 04 Jun 2024 23:35:05 +0000   Tue, 04 Jun 2024 23:15:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 04 Jun 2024 23:35:05 +0000   Tue, 04 Jun 2024 23:15:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 04 Jun 2024 23:35:05 +0000   Tue, 04 Jun 2024 23:15:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.20.128.97
	  Hostname:    multinode-022000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 3d6c5809fb3440069dd9b4ef8addbc3e
	  System UUID:                4c5c03cf-a4e2-8c42-8f91-37d86e19cfc3
	  Boot ID:                    edaf61b4-2d1c-46eb-84d1-21d1359cb7e1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.1.3
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-8bcjx                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 coredns-7db6d8ff4d-mlh9s                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     22m
	  kube-system                 etcd-multinode-022000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         22m
	  kube-system                 kindnet-s279j                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      22m
	  kube-system                 kube-apiserver-multinode-022000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-controller-manager-multinode-022000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-proxy-pbmpr                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-scheduler-multinode-022000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 22m   kube-proxy       
	  Normal  Starting                 22m   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  22m   kubelet          Node multinode-022000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m   kubelet          Node multinode-022000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22m   kubelet          Node multinode-022000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22m   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           22m   node-controller  Node multinode-022000 event: Registered Node multinode-022000 in Controller
	  Normal  NodeReady                21m   kubelet          Node multinode-022000 status is now: NodeReady
	
	
	Name:               multinode-022000-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-022000-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=901ac483c3e1097c63cda7493d918b612a8127f5
	                    minikube.k8s.io/name=multinode-022000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_04T23_18_39_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 04 Jun 2024 23:18:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-022000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 04 Jun 2024 23:37:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 04 Jun 2024 23:34:57 +0000   Tue, 04 Jun 2024 23:18:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 04 Jun 2024 23:34:57 +0000   Tue, 04 Jun 2024 23:18:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 04 Jun 2024 23:34:57 +0000   Tue, 04 Jun 2024 23:18:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 04 Jun 2024 23:34:57 +0000   Tue, 04 Jun 2024 23:18:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.20.130.221
	  Hostname:    multinode-022000-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 bd6919e87c8e4561b186751987589023
	  System UUID:                2246aa18-3838-a94d-a4f8-a3805e5cd9b5
	  Boot ID:                    58a26601-1670-47eb-a478-fb94fa292d33
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.1.3
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-cbgjv    0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kindnet-4rf65              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      18m
	  kube-system                 kube-proxy-xb6b5           0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 18m                kube-proxy       
	  Normal  NodeHasSufficientMemory  18m (x2 over 18m)  kubelet          Node multinode-022000-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18m (x2 over 18m)  kubelet          Node multinode-022000-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m (x2 over 18m)  kubelet          Node multinode-022000-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  18m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           18m                node-controller  Node multinode-022000-m02 event: Registered Node multinode-022000-m02 in Controller
	  Normal  NodeReady                18m                kubelet          Node multinode-022000-m02 status is now: NodeReady
	
	
	Name:               multinode-022000-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-022000-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=901ac483c3e1097c63cda7493d918b612a8127f5
	                    minikube.k8s.io/name=multinode-022000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_04T23_23_43_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 04 Jun 2024 23:23:43 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-022000-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 04 Jun 2024 23:31:33 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Tue, 04 Jun 2024 23:29:19 +0000   Tue, 04 Jun 2024 23:32:14 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Tue, 04 Jun 2024 23:29:19 +0000   Tue, 04 Jun 2024 23:32:14 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Tue, 04 Jun 2024 23:29:19 +0000   Tue, 04 Jun 2024 23:32:14 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Tue, 04 Jun 2024 23:29:19 +0000   Tue, 04 Jun 2024 23:32:14 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  172.20.139.161
	  Hostname:    multinode-022000-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 a79523e655824f52adb01d944e5bcb50
	  System UUID:                842fe8df-cadd-ef42-b579-900ac1fe02b9
	  Boot ID:                    6a990f9c-3cab-4876-a680-70cd5e81c081
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.1.3
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-l64hh       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-proxy-xw7kz    0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientMemory  13m (x2 over 13m)  kubelet          Node multinode-022000-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x2 over 13m)  kubelet          Node multinode-022000-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x2 over 13m)  kubelet          Node multinode-022000-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node multinode-022000-m03 event: Registered Node multinode-022000-m03 in Controller
	  Normal  NodeReady                13m                kubelet          Node multinode-022000-m03 status is now: NodeReady
	  Normal  NodeNotReady             5m17s              node-controller  Node multinode-022000-m03 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.000010] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +53.334138] systemd-fstab-generator[654]: Ignoring "noauto" option for root device
	[  +0.186580] systemd-fstab-generator[666]: Ignoring "noauto" option for root device
	[Jun 4 23:14] systemd-fstab-generator[953]: Ignoring "noauto" option for root device
	[  +0.111896] kauditd_printk_skb: 59 callbacks suppressed
	[  +0.649522] systemd-fstab-generator[993]: Ignoring "noauto" option for root device
	[  +0.231682] systemd-fstab-generator[1005]: Ignoring "noauto" option for root device
	[  +0.248826] systemd-fstab-generator[1019]: Ignoring "noauto" option for root device
	[  +2.853059] systemd-fstab-generator[1189]: Ignoring "noauto" option for root device
	[  +0.210042] systemd-fstab-generator[1201]: Ignoring "noauto" option for root device
	[  +0.219250] systemd-fstab-generator[1213]: Ignoring "noauto" option for root device
	[  +0.305198] systemd-fstab-generator[1228]: Ignoring "noauto" option for root device
	[ +11.780357] systemd-fstab-generator[1322]: Ignoring "noauto" option for root device
	[  +0.115595] kauditd_printk_skb: 205 callbacks suppressed
	[  +3.283147] systemd-fstab-generator[1520]: Ignoring "noauto" option for root device
	[Jun 4 23:15] systemd-fstab-generator[1729]: Ignoring "noauto" option for root device
	[  +0.103471] kauditd_printk_skb: 73 callbacks suppressed
	[  +9.589942] systemd-fstab-generator[2139]: Ignoring "noauto" option for root device
	[  +0.153452] kauditd_printk_skb: 62 callbacks suppressed
	[ +13.869630] systemd-fstab-generator[2324]: Ignoring "noauto" option for root device
	[  +0.187226] kauditd_printk_skb: 12 callbacks suppressed
	[  +8.014750] kauditd_printk_skb: 51 callbacks suppressed
	[Jun 4 23:19] kauditd_printk_skb: 12 callbacks suppressed
	[  +3.073718] hrtimer: interrupt took 2176522 ns
	
	
	==> etcd [8b3adda48945] <==
	{"level":"warn","ts":"2024-06-04T23:18:54.590637Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"158.75205ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-022000-m02\" ","response":"range_response_count:1 size:3149"}
	{"level":"info","ts":"2024-06-04T23:18:54.591449Z","caller":"traceutil/trace.go:171","msg":"trace[1307728064] range","detail":"{range_begin:/registry/minions/multinode-022000-m02; range_end:; response_count:1; response_revision:621; }","duration":"159.601558ms","start":"2024-06-04T23:18:54.431834Z","end":"2024-06-04T23:18:54.591435Z","steps":["trace[1307728064] 'range keys from in-memory index tree'  (duration: 157.181734ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-04T23:18:54.591102Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"113.743811ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1116"}
	{"level":"info","ts":"2024-06-04T23:18:54.592082Z","caller":"traceutil/trace.go:171","msg":"trace[1723561822] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:621; }","duration":"114.78262ms","start":"2024-06-04T23:18:54.477285Z","end":"2024-06-04T23:18:54.592068Z","steps":["trace[1723561822] 'range keys from in-memory index tree'  (duration: 113.65311ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-04T23:20:53.835372Z","caller":"traceutil/trace.go:171","msg":"trace[179164666] transaction","detail":"{read_only:false; response_revision:772; number_of_response:1; }","duration":"132.148042ms","start":"2024-06-04T23:20:53.703205Z","end":"2024-06-04T23:20:53.835353Z","steps":["trace[179164666] 'process raft request'  (duration: 132.042141ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-04T23:23:35.939714Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.525274ms","expected-duration":"100ms","prefix":"","request":"header:<ID:7747475474990895759 > lease_revoke:<id:6b848fe588e2124f>","response":"size:28"}
	{"level":"info","ts":"2024-06-04T23:23:35.940219Z","caller":"traceutil/trace.go:171","msg":"trace[2029253542] linearizableReadLoop","detail":"{readStateIndex:1030; appliedIndex:1028; }","duration":"272.591224ms","start":"2024-06-04T23:23:35.667614Z","end":"2024-06-04T23:23:35.940205Z","steps":["trace[2029253542] 'read index received'  (duration: 169.207243ms)","trace[2029253542] 'applied index is now lower than readState.Index'  (duration: 103.383181ms)"],"step_count":2}
	{"level":"warn","ts":"2024-06-04T23:23:35.940386Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"272.727225ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/daemonsets/\" range_end:\"/registry/daemonsets0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-06-04T23:23:35.940419Z","caller":"traceutil/trace.go:171","msg":"trace[619316574] range","detail":"{range_begin:/registry/daemonsets/; range_end:/registry/daemonsets0; response_count:0; response_revision:916; }","duration":"272.871926ms","start":"2024-06-04T23:23:35.667539Z","end":"2024-06-04T23:23:35.940411Z","steps":["trace[619316574] 'agreement among raft nodes before linearized reading'  (duration: 272.712125ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-04T23:23:35.940833Z","caller":"traceutil/trace.go:171","msg":"trace[165487726] transaction","detail":"{read_only:false; response_revision:916; number_of_response:1; }","duration":"274.534141ms","start":"2024-06-04T23:23:35.666286Z","end":"2024-06-04T23:23:35.94082Z","steps":["trace[165487726] 'process raft request'  (duration: 273.562632ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-04T23:23:54.382499Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"183.923955ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/mutatingwebhookconfigurations/\" range_end:\"/registry/mutatingwebhookconfigurations0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-06-04T23:23:54.382562Z","caller":"traceutil/trace.go:171","msg":"trace[785130424] range","detail":"{range_begin:/registry/mutatingwebhookconfigurations/; range_end:/registry/mutatingwebhookconfigurations0; response_count:0; response_revision:970; }","duration":"184.025556ms","start":"2024-06-04T23:23:54.198523Z","end":"2024-06-04T23:23:54.382549Z","steps":["trace[785130424] 'count revisions from in-memory index tree'  (duration: 183.788154ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-04T23:23:54.936215Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"135.552946ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-022000-m03\" ","response":"range_response_count:1 size:3149"}
	{"level":"info","ts":"2024-06-04T23:23:54.9363Z","caller":"traceutil/trace.go:171","msg":"trace[1706072213] range","detail":"{range_begin:/registry/minions/multinode-022000-m03; range_end:; response_count:1; response_revision:971; }","duration":"135.679448ms","start":"2024-06-04T23:23:54.800606Z","end":"2024-06-04T23:23:54.936285Z","steps":["trace[1706072213] 'range keys from in-memory index tree'  (duration: 135.427645ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-04T23:25:05.556845Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":729}
	{"level":"info","ts":"2024-06-04T23:25:05.580001Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":729,"took":"22.784588ms","hash":1106632076,"current-db-size-bytes":2506752,"current-db-size":"2.5 MB","current-db-size-in-use-bytes":2506752,"current-db-size-in-use":"2.5 MB"}
	{"level":"info","ts":"2024-06-04T23:25:05.580041Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1106632076,"revision":729,"compact-revision":-1}
	{"level":"info","ts":"2024-06-04T23:30:05.579556Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1059}
	{"level":"info","ts":"2024-06-04T23:30:05.59129Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1059,"took":"11.082286ms","hash":4288114892,"current-db-size-bytes":2506752,"current-db-size":"2.5 MB","current-db-size-in-use-bytes":1851392,"current-db-size-in-use":"1.9 MB"}
	{"level":"info","ts":"2024-06-04T23:30:05.591406Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4288114892,"revision":1059,"compact-revision":729}
	{"level":"info","ts":"2024-06-04T23:31:50.844698Z","caller":"traceutil/trace.go:171","msg":"trace[1006371749] transaction","detail":"{read_only:false; response_revision:1462; number_of_response:1; }","duration":"254.060952ms","start":"2024-06-04T23:31:50.590615Z","end":"2024-06-04T23:31:50.844676Z","steps":["trace[1006371749] 'process raft request'  (duration: 227.653749ms)","trace[1006371749] 'compare'  (duration: 26.233402ms)"],"step_count":2}
	{"level":"info","ts":"2024-06-04T23:31:52.954584Z","caller":"traceutil/trace.go:171","msg":"trace[1328533108] transaction","detail":"{read_only:false; response_revision:1464; number_of_response:1; }","duration":"149.69605ms","start":"2024-06-04T23:31:52.804868Z","end":"2024-06-04T23:31:52.954564Z","steps":["trace[1328533108] 'process raft request'  (duration: 149.466548ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-04T23:35:05.598727Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1359}
	{"level":"info","ts":"2024-06-04T23:35:05.607122Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1359,"took":"7.549057ms","hash":1463243870,"current-db-size-bytes":2506752,"current-db-size":"2.5 MB","current-db-size-in-use-bytes":1740800,"current-db-size-in-use":"1.7 MB"}
	{"level":"info","ts":"2024-06-04T23:35:05.607191Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1463243870,"revision":1359,"compact-revision":1059}
	
	
	==> kernel <==
	 23:37:31 up 24 min,  0 users,  load average: 0.30, 0.31, 0.26
	Linux multinode-022000 5.10.207 #1 SMP Tue Jun 4 20:09:42 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [3df3de0da4c3] <==
	I0604 23:36:45.557751       1 main.go:250] Node multinode-022000-m03 has CIDR [10.244.2.0/24] 
	I0604 23:36:55.566005       1 main.go:223] Handling node with IPs: map[172.20.128.97:{}]
	I0604 23:36:55.566521       1 main.go:227] handling current node
	I0604 23:36:55.566815       1 main.go:223] Handling node with IPs: map[172.20.130.221:{}]
	I0604 23:36:55.566928       1 main.go:250] Node multinode-022000-m02 has CIDR [10.244.1.0/24] 
	I0604 23:36:55.567401       1 main.go:223] Handling node with IPs: map[172.20.139.161:{}]
	I0604 23:36:55.567512       1 main.go:250] Node multinode-022000-m03 has CIDR [10.244.2.0/24] 
	I0604 23:37:05.575263       1 main.go:223] Handling node with IPs: map[172.20.128.97:{}]
	I0604 23:37:05.575310       1 main.go:227] handling current node
	I0604 23:37:05.575323       1 main.go:223] Handling node with IPs: map[172.20.130.221:{}]
	I0604 23:37:05.575330       1 main.go:250] Node multinode-022000-m02 has CIDR [10.244.1.0/24] 
	I0604 23:37:05.575870       1 main.go:223] Handling node with IPs: map[172.20.139.161:{}]
	I0604 23:37:05.575908       1 main.go:250] Node multinode-022000-m03 has CIDR [10.244.2.0/24] 
	I0604 23:37:15.588771       1 main.go:223] Handling node with IPs: map[172.20.128.97:{}]
	I0604 23:37:15.588862       1 main.go:227] handling current node
	I0604 23:37:15.588878       1 main.go:223] Handling node with IPs: map[172.20.130.221:{}]
	I0604 23:37:15.588903       1 main.go:250] Node multinode-022000-m02 has CIDR [10.244.1.0/24] 
	I0604 23:37:15.589313       1 main.go:223] Handling node with IPs: map[172.20.139.161:{}]
	I0604 23:37:15.589433       1 main.go:250] Node multinode-022000-m03 has CIDR [10.244.2.0/24] 
	I0604 23:37:25.598785       1 main.go:223] Handling node with IPs: map[172.20.128.97:{}]
	I0604 23:37:25.598842       1 main.go:227] handling current node
	I0604 23:37:25.598857       1 main.go:223] Handling node with IPs: map[172.20.130.221:{}]
	I0604 23:37:25.598863       1 main.go:250] Node multinode-022000-m02 has CIDR [10.244.1.0/24] 
	I0604 23:37:25.599091       1 main.go:223] Handling node with IPs: map[172.20.139.161:{}]
	I0604 23:37:25.599106       1 main.go:250] Node multinode-022000-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [05c914a510d0] <==
	I0604 23:15:08.713518       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0604 23:15:08.722789       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0604 23:15:08.722829       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0604 23:15:09.947302       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0604 23:15:10.035350       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0604 23:15:10.232813       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0604 23:15:10.262496       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.20.128.97]
	I0604 23:15:10.264235       1 controller.go:615] quota admission added evaluator for: endpoints
	I0604 23:15:10.273648       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0604 23:15:10.773786       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0604 23:15:11.242066       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0604 23:15:11.305800       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0604 23:15:11.355040       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0604 23:15:24.649565       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0604 23:15:24.729903       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0604 23:19:31.319940       1 conn.go:339] Error on socket receive: read tcp 172.20.128.97:8443->172.20.128.1:64510: use of closed network connection
	E0604 23:19:31.895448       1 conn.go:339] Error on socket receive: read tcp 172.20.128.97:8443->172.20.128.1:64513: use of closed network connection
	E0604 23:19:32.540474       1 conn.go:339] Error on socket receive: read tcp 172.20.128.97:8443->172.20.128.1:64515: use of closed network connection
	E0604 23:19:33.125452       1 conn.go:339] Error on socket receive: read tcp 172.20.128.97:8443->172.20.128.1:64517: use of closed network connection
	E0604 23:19:33.672858       1 conn.go:339] Error on socket receive: read tcp 172.20.128.97:8443->172.20.128.1:64519: use of closed network connection
	E0604 23:19:34.212422       1 conn.go:339] Error on socket receive: read tcp 172.20.128.97:8443->172.20.128.1:64521: use of closed network connection
	E0604 23:19:35.192885       1 conn.go:339] Error on socket receive: read tcp 172.20.128.97:8443->172.20.128.1:64524: use of closed network connection
	E0604 23:19:45.762907       1 conn.go:339] Error on socket receive: read tcp 172.20.128.97:8443->172.20.128.1:64526: use of closed network connection
	E0604 23:19:46.298718       1 conn.go:339] Error on socket receive: read tcp 172.20.128.97:8443->172.20.128.1:64529: use of closed network connection
	E0604 23:19:56.829563       1 conn.go:339] Error on socket receive: read tcp 172.20.128.97:8443->172.20.128.1:64531: use of closed network connection
	
	
	==> kube-controller-manager [6fa66b9502ad] <==
	I0604 23:15:25.922585       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="51.765637ms"
	I0604 23:15:25.950785       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="28.111929ms"
	I0604 23:15:25.950913       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="89µs"
	I0604 23:15:38.684348       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="104.6µs"
	I0604 23:15:38.721437       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="107.901µs"
	I0604 23:15:38.979188       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0604 23:15:40.789513       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="25.879262ms"
	I0604 23:15:40.790881       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="94.803µs"
	I0604 23:18:38.991339       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-022000-m02\" does not exist"
	I0604 23:18:39.010747       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-022000-m02" podCIDRs=["10.244.1.0/24"]
	I0604 23:18:39.013790       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-022000-m02"
	I0604 23:18:56.388011       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-022000-m02"
	I0604 23:19:25.341611       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="121.243329ms"
	I0604 23:19:25.403275       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="61.596173ms"
	I0604 23:19:25.403620       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="270.603µs"
	I0604 23:19:28.246774       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.136603ms"
	I0604 23:19:28.247075       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="30.8µs"
	I0604 23:19:28.687177       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.475206ms"
	I0604 23:19:28.687294       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="29.001µs"
	I0604 23:23:43.152651       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-022000-m02"
	I0604 23:23:43.155609       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-022000-m03\" does not exist"
	I0604 23:23:43.171059       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-022000-m03" podCIDRs=["10.244.2.0/24"]
	I0604 23:23:44.104033       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-022000-m03"
	I0604 23:24:06.121304       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-022000-m02"
	I0604 23:32:14.249686       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-022000-m02"
	
	
	==> kube-proxy [e160006e0195] <==
	I0604 23:15:26.334501       1 server_linux.go:69] "Using iptables proxy"
	I0604 23:15:26.351604       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.20.128.97"]
	I0604 23:15:26.408095       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0604 23:15:26.408614       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0604 23:15:26.408639       1 server_linux.go:165] "Using iptables Proxier"
	I0604 23:15:26.416392       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0604 23:15:26.417197       1 server.go:872] "Version info" version="v1.30.1"
	I0604 23:15:26.417299       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0604 23:15:26.419396       1 config.go:192] "Starting service config controller"
	I0604 23:15:26.419605       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0604 23:15:26.419645       1 config.go:101] "Starting endpoint slice config controller"
	I0604 23:15:26.420505       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0604 23:15:26.421709       1 config.go:319] "Starting node config controller"
	I0604 23:15:26.421746       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0604 23:15:26.519983       1 shared_informer.go:320] Caches are synced for service config
	I0604 23:15:26.521427       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0604 23:15:26.522325       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [e7a691c4c711] <==
	W0604 23:15:08.993825       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0604 23:15:08.993898       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0604 23:15:09.019374       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0604 23:15:09.019687       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0604 23:15:09.043741       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0604 23:15:09.043806       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0604 23:15:09.096122       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0604 23:15:09.096183       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0604 23:15:09.109897       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0604 23:15:09.110353       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0604 23:15:09.116456       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0604 23:15:09.116572       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0604 23:15:09.140631       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0604 23:15:09.140863       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0604 23:15:09.158729       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0604 23:15:09.158901       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0604 23:15:09.159080       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0604 23:15:09.159123       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0604 23:15:09.254117       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0604 23:15:09.254175       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0604 23:15:09.267345       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0604 23:15:09.267593       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0604 23:15:09.294580       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0604 23:15:09.294643       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0604 23:15:12.023143       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jun 04 23:33:11 multinode-022000 kubelet[2146]: E0604 23:33:11.447754    2146 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 04 23:33:11 multinode-022000 kubelet[2146]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 04 23:33:11 multinode-022000 kubelet[2146]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 04 23:33:11 multinode-022000 kubelet[2146]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 04 23:33:11 multinode-022000 kubelet[2146]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 04 23:34:11 multinode-022000 kubelet[2146]: E0604 23:34:11.446055    2146 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 04 23:34:11 multinode-022000 kubelet[2146]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 04 23:34:11 multinode-022000 kubelet[2146]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 04 23:34:11 multinode-022000 kubelet[2146]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 04 23:34:11 multinode-022000 kubelet[2146]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 04 23:35:11 multinode-022000 kubelet[2146]: E0604 23:35:11.446473    2146 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 04 23:35:11 multinode-022000 kubelet[2146]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 04 23:35:11 multinode-022000 kubelet[2146]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 04 23:35:11 multinode-022000 kubelet[2146]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 04 23:35:11 multinode-022000 kubelet[2146]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 04 23:36:11 multinode-022000 kubelet[2146]: E0604 23:36:11.446073    2146 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 04 23:36:11 multinode-022000 kubelet[2146]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 04 23:36:11 multinode-022000 kubelet[2146]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 04 23:36:11 multinode-022000 kubelet[2146]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 04 23:36:11 multinode-022000 kubelet[2146]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 04 23:37:11 multinode-022000 kubelet[2146]: E0604 23:37:11.447043    2146 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 04 23:37:11 multinode-022000 kubelet[2146]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 04 23:37:11 multinode-022000 kubelet[2146]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 04 23:37:11 multinode-022000 kubelet[2146]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 04 23:37:11 multinode-022000 kubelet[2146]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0604 23:37:22.735861   11200 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
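The only stderr captured by the post-mortem above is the recurring "Unable to resolve the current Docker CLI context" warning, which shows up again in other commands later in this report. Purely as an illustration (this helper is not part of the minikube test harness; the function name and the substring treated as benign are assumptions), a small Go filter could drop such known-benign lines before a test asserts that the remaining stderr is empty:

// filterKnownWarnings is a hypothetical helper: it removes stderr lines that
// match known-benign warnings (here, the Docker CLI context warning seen in
// this report) so a test can check that nothing unexpected remains.
package main

import (
	"fmt"
	"strings"
)

var knownBenign = []string{
	`Unable to resolve the current Docker CLI context "default"`,
}

func filterKnownWarnings(stderr string) []string {
	var remaining []string
	for _, line := range strings.Split(stderr, "\n") {
		trimmed := strings.TrimSpace(line)
		if trimmed == "" {
			continue
		}
		benign := false
		for _, pattern := range knownBenign {
			if strings.Contains(trimmed, pattern) {
				benign = true
				break
			}
		}
		if !benign {
			remaining = append(remaining, trimmed)
		}
	}
	return remaining
}

func main() {
	stderr := `W0604 23:37:22.735861   11200 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found`
	fmt.Println(filterKnownWarnings(stderr)) // prints [] - nothing unexpected left
}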
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-022000 -n multinode-022000
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-022000 -n multinode-022000: (12.7749536s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-022000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StartAfterStop FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StartAfterStop (295.83s)
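The last step of the post-mortem above shells out to kubectl with a field selector to list any pods that are not in the Running phase. As a rough, self-contained sketch of that same query (assuming kubectl and the multinode-022000 context are available on the machine; the Go wrapper itself is not part of the harness):

// Sketch: run the non-Running-pod query used in the post-mortem and print
// whichever pod names come back; an empty result means every pod is Running.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("kubectl",
		"--context", "multinode-022000",
		"get", "po", "-A",
		"-o=jsonpath={.items[*].metadata.name}",
		"--field-selector=status.phase!=Running",
	).CombinedOutput()
	if err != nil {
		fmt.Println("kubectl failed:", err)
		return
	}
	names := strings.Fields(string(out))
	fmt.Printf("%d pod(s) not Running: %v\n", len(names), names)
}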

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (257.75s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-022000
multinode_test.go:321: (dbg) Run:  out/minikube-windows-amd64.exe stop -p multinode-022000
E0604 23:38:17.023320   14064 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-369400\client.crt: The system cannot find the path specified.
multinode_test.go:321: (dbg) Done: out/minikube-windows-amd64.exe stop -p multinode-022000: (2m28.1133897s)
multinode_test.go:326: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-022000 --wait=true -v=8 --alsologtostderr
E0604 23:41:45.680774   14064 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-235400\client.crt: The system cannot find the path specified.
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p multinode-022000 --wait=true -v=8 --alsologtostderr: exit status 1 (1m36.0142824s)
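In the stderr log that follows, minikube restarts the Hyper-V VM and then repeatedly invokes PowerShell to read the VM state and the first IP address of its first network adapter, retrying until the guest reports an address (the empty stdout lines eventually give way to 172.20.129.218). A minimal Go sketch of that polling pattern is shown below; it assumes powershell.exe and the Hyper-V module are available, and the retry interval and attempt count are arbitrary placeholders rather than minikube's actual values:

// Sketch: poll Hyper-V via PowerShell until the named VM reports an IPv4
// address, mirroring the repeated Get-VM calls visible in the log below.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func vmIP(name string) (string, error) {
	cmd := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive",
		fmt.Sprintf("(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", name))
	out, err := cmd.Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	const vm = "multinode-022000"
	for attempt := 1; attempt <= 30; attempt++ {
		ip, err := vmIP(vm)
		if err == nil && ip != "" {
			fmt.Println("VM is reachable at", ip)
			return
		}
		time.Sleep(5 * time.Second) // the guest may not have an address yet
	}
	fmt.Println("timed out waiting for", vm, "to report an IP")
}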

                                                
                                                
-- stdout --
	* [multinode-022000] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4474 Build 19045.4474
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19024
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on existing profile
	* Starting "multinode-022000" primary control-plane node in "multinode-022000" cluster
	* Restarting existing hyperv VM for "multinode-022000" ...

                                                
                                                
-- /stdout --
** stderr ** 
	W0604 23:40:14.900281    4632 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0604 23:40:14.982102    4632 out.go:291] Setting OutFile to fd 1228 ...
	I0604 23:40:14.982993    4632 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0604 23:40:14.982993    4632 out.go:304] Setting ErrFile to fd 532...
	I0604 23:40:14.982993    4632 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0604 23:40:15.009275    4632 out.go:298] Setting JSON to false
	I0604 23:40:15.013877    4632 start.go:129] hostinfo: {"hostname":"minikube6","uptime":91664,"bootTime":1717452750,"procs":184,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4474 Build 19045.4474","kernelVersion":"10.0.19045.4474 Build 19045.4474","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0604 23:40:15.013877    4632 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0604 23:40:15.096177    4632 out.go:177] * [multinode-022000] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4474 Build 19045.4474
	I0604 23:40:15.166787    4632 notify.go:220] Checking for updates...
	I0604 23:40:15.198457    4632 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0604 23:40:15.237822    4632 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0604 23:40:15.292112    4632 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0604 23:40:15.355503    4632 out.go:177]   - MINIKUBE_LOCATION=19024
	I0604 23:40:15.385046    4632 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0604 23:40:15.395296    4632 config.go:182] Loaded profile config "multinode-022000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0604 23:40:15.395962    4632 driver.go:392] Setting default libvirt URI to qemu:///system
	I0604 23:40:21.638009    4632 out.go:177] * Using the hyperv driver based on existing profile
	I0604 23:40:21.655944    4632 start.go:297] selected driver: hyperv
	I0604 23:40:21.655944    4632 start.go:901] validating driver "hyperv" against &{Name:multinode-022000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19024/minikube-v1.33.1-1717518792-19024-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717518322-19024@sha256:d2210ba725128d67c6173c8b8d82d6c8736e8dad7a6c389a278f795205c6764f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-022000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.128.97 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.20.130.221 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.20.128.16 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0604 23:40:21.656188    4632 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0604 23:40:21.725933    4632 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0604 23:40:21.725933    4632 cni.go:84] Creating CNI manager for ""
	I0604 23:40:21.725933    4632 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0604 23:40:21.725933    4632 start.go:340] cluster config:
	{Name:multinode-022000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19024/minikube-v1.33.1-1717518792-19024-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717518322-19024@sha256:d2210ba725128d67c6173c8b8d82d6c8736e8dad7a6c389a278f795205c6764f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-022000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.128.97 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.20.130.221 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.20.128.16 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0604 23:40:21.727134    4632 iso.go:125] acquiring lock: {Name:mkd51e140550ee3ad29317eefa47594b071594dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0604 23:40:21.740494    4632 out.go:177] * Starting "multinode-022000" primary control-plane node in "multinode-022000" cluster
	I0604 23:40:21.794441    4632 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0604 23:40:21.794898    4632 preload.go:147] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
	I0604 23:40:21.794898    4632 cache.go:56] Caching tarball of preloaded images
	I0604 23:40:21.795825    4632 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0604 23:40:21.795825    4632 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0604 23:40:21.796365    4632 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-022000\config.json ...
	I0604 23:40:21.799014    4632 start.go:360] acquireMachinesLock for multinode-022000: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0604 23:40:21.799014    4632 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-022000"
	I0604 23:40:21.799014    4632 start.go:96] Skipping create...Using existing machine configuration
	I0604 23:40:21.799014    4632 fix.go:54] fixHost starting: 
	I0604 23:40:21.800174    4632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000 ).state
	I0604 23:40:24.958485    4632 main.go:141] libmachine: [stdout =====>] : Off
	
	I0604 23:40:24.958597    4632 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:40:24.958684    4632 fix.go:112] recreateIfNeeded on multinode-022000: state=Stopped err=<nil>
	W0604 23:40:24.958755    4632 fix.go:138] unexpected machine state, will restart: <nil>
	I0604 23:40:24.966509    4632 out.go:177] * Restarting existing hyperv VM for "multinode-022000" ...
	I0604 23:40:24.968445    4632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-022000
	I0604 23:40:28.389872    4632 main.go:141] libmachine: [stdout =====>] : 
	I0604 23:40:28.389872    4632 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:40:28.389872    4632 main.go:141] libmachine: Waiting for host to start...
	I0604 23:40:28.390089    4632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000 ).state
	I0604 23:40:30.904727    4632 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:40:30.904727    4632 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:40:30.905560    4632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000 ).networkadapters[0]).ipaddresses[0]
	I0604 23:40:33.731036    4632 main.go:141] libmachine: [stdout =====>] : 
	I0604 23:40:33.731036    4632 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:40:34.745096    4632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000 ).state
	I0604 23:40:37.149252    4632 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:40:37.150207    4632 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:40:37.150207    4632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000 ).networkadapters[0]).ipaddresses[0]
	I0604 23:40:39.911815    4632 main.go:141] libmachine: [stdout =====>] : 
	I0604 23:40:39.911815    4632 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:40:40.926656    4632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000 ).state
	I0604 23:40:43.364528    4632 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:40:43.364528    4632 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:40:43.364838    4632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000 ).networkadapters[0]).ipaddresses[0]
	I0604 23:40:46.166271    4632 main.go:141] libmachine: [stdout =====>] : 
	I0604 23:40:46.166271    4632 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:40:47.178025    4632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000 ).state
	I0604 23:40:49.580101    4632 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:40:49.580101    4632 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:40:49.580101    4632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000 ).networkadapters[0]).ipaddresses[0]
	I0604 23:40:52.320149    4632 main.go:141] libmachine: [stdout =====>] : 
	I0604 23:40:52.320219    4632 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:40:53.331679    4632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000 ).state
	I0604 23:40:55.788072    4632 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:40:55.788072    4632 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:40:55.788072    4632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000 ).networkadapters[0]).ipaddresses[0]
	I0604 23:40:58.579606    4632 main.go:141] libmachine: [stdout =====>] : 172.20.129.218
	
	I0604 23:40:58.579606    4632 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:40:58.582335    4632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000 ).state
	I0604 23:41:00.919607    4632 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:41:00.919843    4632 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:41:00.919843    4632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000 ).networkadapters[0]).ipaddresses[0]
	I0604 23:41:03.739548    4632 main.go:141] libmachine: [stdout =====>] : 172.20.129.218
	
	I0604 23:41:03.739548    4632 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:41:03.740842    4632 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-022000\config.json ...
	I0604 23:41:03.743778    4632 machine.go:94] provisionDockerMachine start ...
	I0604 23:41:03.743892    4632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000 ).state
	I0604 23:41:06.110701    4632 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:41:06.110701    4632 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:41:06.111060    4632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000 ).networkadapters[0]).ipaddresses[0]
	I0604 23:41:08.928420    4632 main.go:141] libmachine: [stdout =====>] : 172.20.129.218
	
	I0604 23:41:08.928420    4632 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:41:08.935493    4632 main.go:141] libmachine: Using SSH client type: native
	I0604 23:41:08.935757    4632 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf6a540] 0xf6d120 <nil>  [] 0s} 172.20.129.218 22 <nil> <nil>}
	I0604 23:41:08.935757    4632 main.go:141] libmachine: About to run SSH command:
	hostname
	I0604 23:41:09.071426    4632 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0604 23:41:09.071583    4632 buildroot.go:166] provisioning hostname "multinode-022000"
	I0604 23:41:09.071583    4632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000 ).state
	I0604 23:41:11.421723    4632 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:41:11.422515    4632 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:41:11.422515    4632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000 ).networkadapters[0]).ipaddresses[0]
	I0604 23:41:14.318870    4632 main.go:141] libmachine: [stdout =====>] : 172.20.129.218
	
	I0604 23:41:14.319907    4632 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:41:14.326833    4632 main.go:141] libmachine: Using SSH client type: native
	I0604 23:41:14.327713    4632 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf6a540] 0xf6d120 <nil>  [] 0s} 172.20.129.218 22 <nil> <nil>}
	I0604 23:41:14.327713    4632 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-022000 && echo "multinode-022000" | sudo tee /etc/hostname
	I0604 23:41:14.500999    4632 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-022000
	
	I0604 23:41:14.501331    4632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000 ).state
	I0604 23:41:16.880059    4632 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:41:16.880263    4632 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:41:16.880263    4632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000 ).networkadapters[0]).ipaddresses[0]
	I0604 23:41:19.742172    4632 main.go:141] libmachine: [stdout =====>] : 172.20.129.218
	
	I0604 23:41:19.742172    4632 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:41:19.748513    4632 main.go:141] libmachine: Using SSH client type: native
	I0604 23:41:19.748671    4632 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf6a540] 0xf6d120 <nil>  [] 0s} 172.20.129.218 22 <nil> <nil>}
	I0604 23:41:19.748671    4632 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-022000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-022000/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-022000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0604 23:41:19.913472    4632 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0604 23:41:19.913472    4632 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0604 23:41:19.913472    4632 buildroot.go:174] setting up certificates
	I0604 23:41:19.913472    4632 provision.go:84] configureAuth start
	I0604 23:41:19.913710    4632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000 ).state
	I0604 23:41:22.331378    4632 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:41:22.331378    4632 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:41:22.331628    4632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000 ).networkadapters[0]).ipaddresses[0]
	I0604 23:41:25.259040    4632 main.go:141] libmachine: [stdout =====>] : 172.20.129.218
	
	I0604 23:41:25.259102    4632 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:41:25.259102    4632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000 ).state
	I0604 23:41:27.630714    4632 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:41:27.630714    4632 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:41:27.630714    4632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000 ).networkadapters[0]).ipaddresses[0]
	I0604 23:41:30.491310    4632 main.go:141] libmachine: [stdout =====>] : 172.20.129.218
	
	I0604 23:41:30.492175    4632 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:41:30.492175    4632 provision.go:143] copyHostCerts
	I0604 23:41:30.492361    4632 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0604 23:41:30.492657    4632 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0604 23:41:30.492657    4632 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0604 23:41:30.493188    4632 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0604 23:41:30.494353    4632 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0604 23:41:30.494353    4632 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0604 23:41:30.494353    4632 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0604 23:41:30.495204    4632 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0604 23:41:30.496437    4632 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0604 23:41:30.496734    4632 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0604 23:41:30.496777    4632 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0604 23:41:30.496971    4632 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0604 23:41:30.497912    4632 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-022000 san=[127.0.0.1 172.20.129.218 localhost minikube multinode-022000]
	I0604 23:41:30.786840    4632 provision.go:177] copyRemoteCerts
	I0604 23:41:30.798631    4632 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0604 23:41:30.798631    4632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000 ).state
	I0604 23:41:33.191346    4632 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:41:33.192357    4632 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:41:33.192394    4632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000 ).networkadapters[0]).ipaddresses[0]
	I0604 23:41:36.021545    4632 main.go:141] libmachine: [stdout =====>] : 172.20.129.218
	
	I0604 23:41:36.022366    4632 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:41:36.022534    4632 sshutil.go:53] new ssh client: &{IP:172.20.129.218 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-022000\id_rsa Username:docker}
	I0604 23:41:36.135043    4632 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.3363691s)
	I0604 23:41:36.135043    4632 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0604 23:41:36.135043    4632 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0604 23:41:36.196558    4632 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0604 23:41:36.196558    4632 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0604 23:41:36.244947    4632 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0604 23:41:36.245955    4632 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0604 23:41:36.304156    4632 provision.go:87] duration metric: took 16.3903137s to configureAuth
	I0604 23:41:36.304227    4632 buildroot.go:189] setting minikube options for container-runtime
	I0604 23:41:36.304627    4632 config.go:182] Loaded profile config "multinode-022000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0604 23:41:36.304627    4632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000 ).state
	I0604 23:41:38.638246    4632 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:41:38.638543    4632 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:41:38.638665    4632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000 ).networkadapters[0]).ipaddresses[0]
	I0604 23:41:41.454763    4632 main.go:141] libmachine: [stdout =====>] : 172.20.129.218
	
	I0604 23:41:41.454763    4632 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:41:41.462205    4632 main.go:141] libmachine: Using SSH client type: native
	I0604 23:41:41.462387    4632 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf6a540] 0xf6d120 <nil>  [] 0s} 172.20.129.218 22 <nil> <nil>}
	I0604 23:41:41.462387    4632 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0604 23:41:41.598683    4632 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0604 23:41:41.598683    4632 buildroot.go:70] root file system type: tmpfs
	I0604 23:41:41.599329    4632 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0604 23:41:41.599399    4632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000 ).state
	I0604 23:41:43.994556    4632 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:41:43.995558    4632 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:41:43.995594    4632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000 ).networkadapters[0]).ipaddresses[0]
	I0604 23:41:46.836424    4632 main.go:141] libmachine: [stdout =====>] : 172.20.129.218
	
	I0604 23:41:46.836424    4632 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:41:46.843651    4632 main.go:141] libmachine: Using SSH client type: native
	I0604 23:41:46.844496    4632 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf6a540] 0xf6d120 <nil>  [] 0s} 172.20.129.218 22 <nil> <nil>}
	I0604 23:41:46.844650    4632 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0604 23:41:47.015683    4632 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0604 23:41:47.015774    4632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000 ).state
	I0604 23:41:49.338427    4632 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:41:49.338462    4632 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:41:49.338755    4632 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000 ).networkadapters[0]).ipaddresses[0]

                                                
                                                
** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-windows-amd64.exe node list -p multinode-022000" : exit status 1
multinode_test.go:331: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-022000
multinode_test.go:331: (dbg) Non-zero exit: out/minikube-windows-amd64.exe node list -p multinode-022000: context deadline exceeded (0s)
multinode_test.go:333: failed to run node list. args "out/minikube-windows-amd64.exe node list -p multinode-022000" : context deadline exceeded
multinode_test.go:338: reported node list is not the same after restart. Before restart: multinode-022000	172.20.128.97
multinode-022000-m02	172.20.130.221
multinode-022000-m03	172.20.128.16

                                                
                                                
After restart: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-022000 -n multinode-022000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-022000 -n multinode-022000: exit status 6 (13.315141s)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	W0604 23:41:50.940251    7100 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0604 23:42:04.046360    7100 status.go:417] kubeconfig endpoint: get endpoint: "multinode-022000" does not appear in C:\Users\jenkins.minikube6\minikube-integration\kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "multinode-022000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (257.75s)
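Note: the "context deadline exceeded (0s)" on the follow-up node list run above is what Go's os/exec reports when a command is launched with a context whose deadline has already passed; the failed restart consumed the test's entire timeout, so the follow-up command never actually executed. A minimal, self-contained sketch of that behavior (not the test harness code; the profile name is only echoed from the log above):

	package main

	import (
		"context"
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		// Deadline already in the past, mirroring a test whose overall timeout
		// was exhausted by the earlier restart attempt.
		ctx, cancel := context.WithDeadline(context.Background(), time.Now().Add(-time.Second))
		defer cancel()

		started := time.Now()
		// Any command would do; os/exec refuses to start it because ctx is already done.
		err := exec.CommandContext(ctx, "minikube", "node", "list", "-p", "multinode-022000").Run()
		fmt.Printf("err=%v after %s\n", err, time.Since(started).Round(time.Millisecond))
		// Typically prints: err=context deadline exceeded after 0s
	}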

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (299.86s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-628100 --driver=hyperv
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-628100 --driver=hyperv: exit status 1 (4m59.5931287s)

                                                
                                                
-- stdout --
	* [NoKubernetes-628100] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4474 Build 19045.4474
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19024
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on user configuration
	* Starting "NoKubernetes-628100" primary control-plane node in "NoKubernetes-628100" cluster

                                                
                                                
-- /stdout --
** stderr ** 
	W0604 23:58:25.003777    3428 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p NoKubernetes-628100 --driver=hyperv" : exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-628100 -n NoKubernetes-628100
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-628100 -n NoKubernetes-628100: exit status 7 (256.0337ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	W0605 00:03:24.584468    6752 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-628100" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (299.86s)
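Note: the stderr warning about the Docker CLI context "default", repeated in every run in this report, is noise from the CLI's context store rather than a Hyper-V or minikube failure: the store keys each context's metadata directory by the SHA-256 digest of the context name, and that meta.json simply does not exist on this Jenkins host. A small sketch showing where the long hex directory component in the warning comes from (assuming the digest-of-name layout; minikube only logs the warning and continues):

	package main

	import (
		"crypto/sha256"
		"fmt"
		"path/filepath"
	)

	func main() {
		// Docker's CLI context store names each context's metadata directory
		// after the SHA-256 digest of the context name.
		digest := sha256.Sum256([]byte("default"))
		meta := filepath.Join(`C:\Users\jenkins.minikube6\.docker\contexts\meta`, fmt.Sprintf("%x", digest), "meta.json")
		fmt.Println(meta)
		// The %x digest matches the directory component seen in the warnings above.
	}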

                                                
                                    

Test pass (155/199)

Order   Passed test   Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 23.56
4 TestDownloadOnly/v1.20.0/preload-exists 0.08
7 TestDownloadOnly/v1.20.0/kubectl 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.31
9 TestDownloadOnly/v1.20.0/DeleteAll 1.33
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 1.27
12 TestDownloadOnly/v1.30.1/json-events 17.27
13 TestDownloadOnly/v1.30.1/preload-exists 0.01
16 TestDownloadOnly/v1.30.1/kubectl 0
17 TestDownloadOnly/v1.30.1/LogsDuration 0.33
18 TestDownloadOnly/v1.30.1/DeleteAll 1.35
19 TestDownloadOnly/v1.30.1/DeleteAlwaysSucceeds 1.28
21 TestBinaryMirror 7.62
22 TestOffline 272.23
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.31
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.31
27 TestAddons/Setup 461.92
30 TestAddons/parallel/Ingress 74.83
31 TestAddons/parallel/InspektorGadget 28.84
32 TestAddons/parallel/MetricsServer 24.48
33 TestAddons/parallel/HelmTiller 31.06
35 TestAddons/parallel/CSI 86.29
36 TestAddons/parallel/Headlamp 40.23
37 TestAddons/parallel/CloudSpanner 22.49
38 TestAddons/parallel/LocalPath 95.81
39 TestAddons/parallel/NvidiaDevicePlugin 23.4
40 TestAddons/parallel/Yakd 5.02
41 TestAddons/parallel/Volcano 60.08
44 TestAddons/serial/GCPAuth/Namespaces 0.4
45 TestAddons/StoppedEnableDisable 58.27
47 TestCertExpiration 1097.62
48 TestDockerFlags 330.88
49 TestForceSystemdFlag 422.37
50 TestForceSystemdEnv 555.17
57 TestErrorSpam/start 18.45
58 TestErrorSpam/status 39.46
59 TestErrorSpam/pause 24.77
60 TestErrorSpam/unpause 25.55
61 TestErrorSpam/stop 65.95
64 TestFunctional/serial/CopySyncFile 0.04
65 TestFunctional/serial/StartWithProxy 258.56
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 134.56
68 TestFunctional/serial/KubeContext 0.15
69 TestFunctional/serial/KubectlGetPods 0.25
72 TestFunctional/serial/CacheCmd/cache/add_remote 28.61
73 TestFunctional/serial/CacheCmd/cache/add_local 11.96
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.28
75 TestFunctional/serial/CacheCmd/cache/list 0.28
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 10.5
77 TestFunctional/serial/CacheCmd/cache/cache_reload 40.55
78 TestFunctional/serial/CacheCmd/cache/delete 0.58
79 TestFunctional/serial/MinikubeKubectlCmd 0.55
81 TestFunctional/serial/ExtraConfig 136.01
82 TestFunctional/serial/ComponentHealth 0.2
83 TestFunctional/serial/LogsCmd 9.21
84 TestFunctional/serial/LogsFileCmd 11.54
85 TestFunctional/serial/InvalidService 21.95
91 TestFunctional/parallel/StatusCmd 45.43
95 TestFunctional/parallel/ServiceCmdConnect 28.97
96 TestFunctional/parallel/AddonsCmd 0.96
97 TestFunctional/parallel/PersistentVolumeClaim 48.98
99 TestFunctional/parallel/SSHCmd 21.7
100 TestFunctional/parallel/CpCmd 62.74
101 TestFunctional/parallel/MySQL 66.78
102 TestFunctional/parallel/FileSync 12.05
103 TestFunctional/parallel/CertSync 66.3
107 TestFunctional/parallel/NodeLabels 0.2
109 TestFunctional/parallel/NonActiveRuntimeDisabled 10.86
111 TestFunctional/parallel/License 3.63
112 TestFunctional/parallel/ServiceCmd/DeployApp 19.48
113 TestFunctional/parallel/ServiceCmd/List 15.21
115 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 9.57
116 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
118 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 14.73
119 TestFunctional/parallel/ServiceCmd/JSONOutput 14.36
125 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
126 TestFunctional/parallel/ProfileCmd/profile_not_create 12.52
128 TestFunctional/parallel/ProfileCmd/profile_list 12.78
130 TestFunctional/parallel/ProfileCmd/profile_json_output 12.19
132 TestFunctional/parallel/Version/short 0.58
133 TestFunctional/parallel/Version/components 8.64
134 TestFunctional/parallel/DockerEnv/powershell 47.34
135 TestFunctional/parallel/ImageCommands/ImageListShort 8.55
136 TestFunctional/parallel/ImageCommands/ImageListTable 8.12
137 TestFunctional/parallel/ImageCommands/ImageListJson 8.5
138 TestFunctional/parallel/ImageCommands/ImageListYaml 8.66
139 TestFunctional/parallel/ImageCommands/ImageBuild 28.87
140 TestFunctional/parallel/ImageCommands/Setup 4.41
141 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 24.58
142 TestFunctional/parallel/UpdateContextCmd/no_changes 2.98
143 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 3.08
144 TestFunctional/parallel/UpdateContextCmd/no_clusters 2.96
145 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 20.75
146 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 29.59
147 TestFunctional/parallel/ImageCommands/ImageSaveToFile 10.12
148 TestFunctional/parallel/ImageCommands/ImageRemove 16.3
149 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 18.55
150 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 10
151 TestFunctional/delete_addon-resizer_images 0.53
152 TestFunctional/delete_my-image_image 0.18
153 TestFunctional/delete_minikube_cached_images 0.2
157 TestMultiControlPlane/serial/StartCluster 768.27
158 TestMultiControlPlane/serial/DeployApp 13.37
160 TestMultiControlPlane/serial/AddWorkerNode 281.12
161 TestMultiControlPlane/serial/NodeLabels 0.31
162 TestMultiControlPlane/serial/HAppyAfterClusterStart 32.47
166 TestImageBuild/serial/Setup 210.6
167 TestImageBuild/serial/NormalBuild 10.28
168 TestImageBuild/serial/BuildWithBuildArg 9.73
169 TestImageBuild/serial/BuildWithDockerIgnore 8.43
170 TestImageBuild/serial/BuildWithSpecifiedDockerfile 8.19
174 TestJSONOutput/start/Command 227.79
175 TestJSONOutput/start/Audit 0
177 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
178 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
180 TestJSONOutput/pause/Command 8.71
181 TestJSONOutput/pause/Audit 0
183 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
186 TestJSONOutput/unpause/Command 8.74
187 TestJSONOutput/unpause/Audit 0
189 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
190 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/stop/Command 42.43
193 TestJSONOutput/stop/Audit 0
195 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
196 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
197 TestErrorJSONOutput 1.6
202 TestMainNoArgs 0.29
203 TestMinikubeProfile 562.67
206 TestMountStart/serial/StartWithMountFirst 167.91
207 TestMountStart/serial/VerifyMountFirst 10.44
208 TestMountStart/serial/StartWithMountSecond 172.28
209 TestMountStart/serial/VerifyMountSecond 10.55
210 TestMountStart/serial/DeleteFirst 30.39
211 TestMountStart/serial/VerifyMountPostDelete 10.23
212 TestMountStart/serial/Stop 28.69
213 TestMountStart/serial/RestartStopped 126.78
214 TestMountStart/serial/VerifyMountPostStop 10.1
217 TestMultiNode/serial/FreshStart2Nodes 453.77
218 TestMultiNode/serial/DeployApp2Nodes 9.73
220 TestMultiNode/serial/AddNode 252.13
221 TestMultiNode/serial/MultiNodeLabels 0.19
222 TestMultiNode/serial/ProfileList 12.92
223 TestMultiNode/serial/CopyFile 388.91
224 TestMultiNode/serial/StopNode 81.5
230 TestPreload 557.16
231 TestScheduledStopWindows 345.93
236 TestRunningBinaryUpgrade 1004.49
238 TestKubernetesUpgrade 1434.73
242 TestNoKubernetes/serial/StartNoK8sWithVersion 0.44
254 TestStoppedBinaryUpgrade/Setup 2.91
255 TestStoppedBinaryUpgrade/Upgrade 975.07
263 TestStoppedBinaryUpgrade/MinikubeLogs 10.16
TestDownloadOnly/v1.20.0/json-events (23.56s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-352000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperv
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-352000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperv: (23.5492637s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (23.56s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.08s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
--- PASS: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.31s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-352000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-352000: exit status 85 (305.7402ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |       User        | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-352000 | minikube6\jenkins | v1.33.1 | 04 Jun 24 21:29 UTC |          |
	|         | -p download-only-352000        |                      |                   |         |                     |          |
	|         | --force --alsologtostderr      |                      |                   |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |                   |         |                     |          |
	|         | --container-runtime=docker     |                      |                   |         |                     |          |
	|         | --driver=hyperv                |                      |                   |         |                     |          |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/04 21:29:37
	Running on machine: minikube6
	Binary: Built with gc go1.22.3 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0604 21:29:37.981491    8208 out.go:291] Setting OutFile to fd 624 ...
	I0604 21:29:37.981852    8208 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0604 21:29:37.981852    8208 out.go:304] Setting ErrFile to fd 628...
	I0604 21:29:37.981852    8208 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0604 21:29:37.993869    8208 root.go:314] Error reading config file at C:\Users\jenkins.minikube6\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins.minikube6\minikube-integration\.minikube\config\config.json: The system cannot find the path specified.
	I0604 21:29:38.010350    8208 out.go:298] Setting JSON to true
	I0604 21:29:38.014354    8208 start.go:129] hostinfo: {"hostname":"minikube6","uptime":83827,"bootTime":1717452750,"procs":185,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4474 Build 19045.4474","kernelVersion":"10.0.19045.4474 Build 19045.4474","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0604 21:29:38.014354    8208 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0604 21:29:38.021654    8208 out.go:97] [download-only-352000] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4474 Build 19045.4474
	I0604 21:29:38.021863    8208 notify.go:220] Checking for updates...
	W0604 21:29:38.021863    8208 preload.go:294] Failed to list preload files: open C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball: The system cannot find the file specified.
	I0604 21:29:38.024312    8208 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0604 21:29:38.027309    8208 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0604 21:29:38.029678    8208 out.go:169] MINIKUBE_LOCATION=19024
	I0604 21:29:38.032234    8208 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0604 21:29:38.037285    8208 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0604 21:29:38.040510    8208 driver.go:392] Setting default libvirt URI to qemu:///system
	I0604 21:29:43.634610    8208 out.go:97] Using the hyperv driver based on user configuration
	I0604 21:29:43.634610    8208 start.go:297] selected driver: hyperv
	I0604 21:29:43.634610    8208 start.go:901] validating driver "hyperv" against <nil>
	I0604 21:29:43.634885    8208 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0604 21:29:43.686592    8208 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=65534MB, container=0MB
	I0604 21:29:43.687627    8208 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0604 21:29:43.687627    8208 cni.go:84] Creating CNI manager for ""
	I0604 21:29:43.687627    8208 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0604 21:29:43.688171    8208 start.go:340] cluster config:
	{Name:download-only-352000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717518322-19024@sha256:d2210ba725128d67c6173c8b8d82d6c8736e8dad7a6c389a278f795205c6764f Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-352000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0604 21:29:43.689189    8208 iso.go:125] acquiring lock: {Name:mkd51e140550ee3ad29317eefa47594b071594dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0604 21:29:43.692438    8208 out.go:97] Downloading VM boot image ...
	I0604 21:29:43.692670    8208 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19024/minikube-v1.33.1-1717518792-19024-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19024/minikube-v1.33.1-1717518792-19024-amd64.iso.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\iso\amd64\minikube-v1.33.1-1717518792-19024-amd64.iso
	I0604 21:29:50.714682    8208 out.go:97] Starting "download-only-352000" primary control-plane node in "download-only-352000" cluster
	I0604 21:29:50.728945    8208 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0604 21:29:50.792961    8208 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0604 21:29:50.793072    8208 cache.go:56] Caching tarball of preloaded images
	I0604 21:29:50.793536    8208 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0604 21:29:50.796462    8208 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0604 21:29:50.796462    8208 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0604 21:29:50.866802    8208 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0604 21:29:54.431619    8208 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0604 21:29:54.438797    8208 preload.go:255] verifying checksum of C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0604 21:29:55.549358    8208 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0604 21:29:55.552754    8208 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\download-only-352000\config.json ...
	I0604 21:29:55.553415    8208 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\download-only-352000\config.json: {Name:mkaf325473eb61735523c2f5d030a83cdda5cb9e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0604 21:29:55.553779    8208 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0604 21:29:55.554935    8208 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/windows/amd64/kubectl.exe?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/windows/amd64/kubectl.exe.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\windows\amd64\v1.20.0/kubectl.exe
	
	
	* The control-plane node download-only-352000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-352000"

                                                
                                                
-- /stdout --
** stderr ** 
	W0604 21:30:01.542977   10560 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.31s)
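Note: several of the passing cases in this report (LogsDuration here, and the PreSetup addon checks further down) deliberately expect a non-zero exit such as status 85, because running minikube logs or addons against a profile that was never started is supposed to fail; the "(dbg) Non-zero exit: ..." lines record that expected failure. A hedged sketch of how such an exit code can be read back in Go (not the helpers_test.go implementation itself):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		// Stand-in for a command that exits non-zero, e.g. minikube logs against a never-started profile.
		err := exec.Command("cmd", "/c", "exit 85").Run()

		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			fmt.Println("exit status:", exitErr.ExitCode()) // exit status: 85
		} else if err != nil {
			fmt.Println("failed to run:", err)
		}
	}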

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (1.33s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:197: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (1.3224261s)
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (1.33s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (1.27s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-352000
aaa_download_only_test.go:208: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-352000: (1.2662897s)
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (1.27s)

                                                
                                    
TestDownloadOnly/v1.30.1/json-events (17.27s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-033100 --force --alsologtostderr --kubernetes-version=v1.30.1 --container-runtime=docker --driver=hyperv
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-033100 --force --alsologtostderr --kubernetes-version=v1.30.1 --container-runtime=docker --driver=hyperv: (17.2738031s)
--- PASS: TestDownloadOnly/v1.30.1/json-events (17.27s)

                                                
                                    
TestDownloadOnly/v1.30.1/preload-exists (0.01s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/preload-exists
--- PASS: TestDownloadOnly/v1.30.1/preload-exists (0.01s)

                                                
                                    
TestDownloadOnly/v1.30.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/kubectl
--- PASS: TestDownloadOnly/v1.30.1/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.1/LogsDuration (0.33s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-033100
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-033100: exit status 85 (318.9495ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-352000 | minikube6\jenkins | v1.33.1 | 04 Jun 24 21:29 UTC |                     |
	|         | -p download-only-352000        |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr      |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |                   |         |                     |                     |
	|         | --container-runtime=docker     |                      |                   |         |                     |                     |
	|         | --driver=hyperv                |                      |                   |         |                     |                     |
	| delete  | --all                          | minikube             | minikube6\jenkins | v1.33.1 | 04 Jun 24 21:30 UTC | 04 Jun 24 21:30 UTC |
	| delete  | -p download-only-352000        | download-only-352000 | minikube6\jenkins | v1.33.1 | 04 Jun 24 21:30 UTC | 04 Jun 24 21:30 UTC |
	| start   | -o=json --download-only        | download-only-033100 | minikube6\jenkins | v1.33.1 | 04 Jun 24 21:30 UTC |                     |
	|         | -p download-only-033100        |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr      |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.30.1   |                      |                   |         |                     |                     |
	|         | --container-runtime=docker     |                      |                   |         |                     |                     |
	|         | --driver=hyperv                |                      |                   |         |                     |                     |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/04 21:30:04
	Running on machine: minikube6
	Binary: Built with gc go1.22.3 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0604 21:30:04.551715   10664 out.go:291] Setting OutFile to fd 740 ...
	I0604 21:30:04.552518   10664 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0604 21:30:04.552518   10664 out.go:304] Setting ErrFile to fd 744...
	I0604 21:30:04.552518   10664 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0604 21:30:04.578552   10664 out.go:298] Setting JSON to true
	I0604 21:30:04.582528   10664 start.go:129] hostinfo: {"hostname":"minikube6","uptime":83854,"bootTime":1717452750,"procs":186,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4474 Build 19045.4474","kernelVersion":"10.0.19045.4474 Build 19045.4474","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0604 21:30:04.582528   10664 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0604 21:30:04.587965   10664 out.go:97] [download-only-033100] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4474 Build 19045.4474
	I0604 21:30:04.588315   10664 notify.go:220] Checking for updates...
	I0604 21:30:04.590573   10664 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0604 21:30:04.592832   10664 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0604 21:30:04.595983   10664 out.go:169] MINIKUBE_LOCATION=19024
	I0604 21:30:04.598368   10664 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0604 21:30:04.603596   10664 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0604 21:30:04.604410   10664 driver.go:392] Setting default libvirt URI to qemu:///system
	I0604 21:30:10.263388   10664 out.go:97] Using the hyperv driver based on user configuration
	I0604 21:30:10.263388   10664 start.go:297] selected driver: hyperv
	I0604 21:30:10.263388   10664 start.go:901] validating driver "hyperv" against <nil>
	I0604 21:30:10.263388   10664 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0604 21:30:10.316065   10664 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=65534MB, container=0MB
	I0604 21:30:10.317430   10664 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0604 21:30:10.317964   10664 cni.go:84] Creating CNI manager for ""
	I0604 21:30:10.318195   10664 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0604 21:30:10.318316   10664 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0604 21:30:10.318506   10664 start.go:340] cluster config:
	{Name:download-only-033100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717518322-19024@sha256:d2210ba725128d67c6173c8b8d82d6c8736e8dad7a6c389a278f795205c6764f Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:download-only-033100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0604 21:30:10.318902   10664 iso.go:125] acquiring lock: {Name:mkd51e140550ee3ad29317eefa47594b071594dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0604 21:30:10.323193   10664 out.go:97] Starting "download-only-033100" primary control-plane node in "download-only-033100" cluster
	I0604 21:30:10.323308   10664 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0604 21:30:10.366369   10664 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.1/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
	I0604 21:30:10.369909   10664 cache.go:56] Caching tarball of preloaded images
	I0604 21:30:10.370412   10664 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0604 21:30:10.372953   10664 out.go:97] Downloading Kubernetes v1.30.1 preload ...
	I0604 21:30:10.373041   10664 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 ...
	I0604 21:30:10.445593   10664 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.1/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4?checksum=md5:f110de85c4cd01fa5de0726fbc529387 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
	I0604 21:30:19.393101   10664 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 ...
	I0604 21:30:19.393784   10664 preload.go:255] verifying checksum of C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 ...
	
	
	* The control-plane node download-only-033100 host does not exist
	  To start a cluster, run: "minikube start -p download-only-033100"

                                                
                                                
-- /stdout --
** stderr ** 
	W0604 21:30:21.758680    3968 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.1/LogsDuration (0.33s)

                                                
                                    
TestDownloadOnly/v1.30.1/DeleteAll (1.35s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:197: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (1.3542203s)
--- PASS: TestDownloadOnly/v1.30.1/DeleteAll (1.35s)

                                                
                                    
TestDownloadOnly/v1.30.1/DeleteAlwaysSucceeds (1.28s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-033100
aaa_download_only_test.go:208: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-033100: (1.2752908s)
--- PASS: TestDownloadOnly/v1.30.1/DeleteAlwaysSucceeds (1.28s)

                                                
                                    
TestBinaryMirror (7.62s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p binary-mirror-211000 --alsologtostderr --binary-mirror http://127.0.0.1:62318 --driver=hyperv
aaa_download_only_test.go:314: (dbg) Done: out/minikube-windows-amd64.exe start --download-only -p binary-mirror-211000 --alsologtostderr --binary-mirror http://127.0.0.1:62318 --driver=hyperv: (6.7119718s)
helpers_test.go:175: Cleaning up "binary-mirror-211000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p binary-mirror-211000
--- PASS: TestBinaryMirror (7.62s)

                                                
                                    
TestOffline (272.23s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe start -p offline-docker-535100 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperv
aab_offline_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe start -p offline-docker-535100 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperv: (3m45.2844727s)
helpers_test.go:175: Cleaning up "offline-docker-535100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p offline-docker-535100
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p offline-docker-535100: (46.9408431s)
--- PASS: TestOffline (272.23s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.31s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1029: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-369400
addons_test.go:1029: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons enable dashboard -p addons-369400: exit status 85 (299.5256ms)

                                                
                                                
-- stdout --
	* Profile "addons-369400" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-369400"

                                                
                                                
-- /stdout --
** stderr ** 
	W0604 21:30:34.856677    4660 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.31s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.31s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1040: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-369400
addons_test.go:1040: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons disable dashboard -p addons-369400: exit status 85 (305.7268ms)

                                                
                                                
-- stdout --
	* Profile "addons-369400" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-369400"

                                                
                                                
-- /stdout --
** stderr ** 
	W0604 21:30:34.856511    6616 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.31s)

                                                
                                    
TestAddons/Setup (461.92s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe start -p addons-369400 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=hyperv --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe start -p addons-369400 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=hyperv --addons=ingress --addons=ingress-dns --addons=helm-tiller: (7m41.9168887s)
--- PASS: TestAddons/Setup (461.92s)

                                                
                                    
TestAddons/parallel/Ingress (74.83s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-369400 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-369400 replace --force -f testdata\nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-369400 replace --force -f testdata\nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [26d9816e-eb29-429f-8b6b-b8200b9876ec] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [26d9816e-eb29-429f-8b6b-b8200b9876ec] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 15.011886s
addons_test.go:264: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-369400 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Done: out/minikube-windows-amd64.exe -p addons-369400 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": (10.8450303s)
addons_test.go:271: debug: unexpected stderr for out/minikube-windows-amd64.exe -p addons-369400 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'":
W0604 21:39:42.485142    8604 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
addons_test.go:288: (dbg) Run:  kubectl --context addons-369400 replace --force -f testdata\ingress-dns-example-v1.yaml
addons_test.go:288: (dbg) Done: kubectl --context addons-369400 replace --force -f testdata\ingress-dns-example-v1.yaml: (2.1523067s)
addons_test.go:293: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-369400 ip
addons_test.go:293: (dbg) Done: out/minikube-windows-amd64.exe -p addons-369400 ip: (2.7530868s)
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 172.20.139.74
addons_test.go:308: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-369400 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-windows-amd64.exe -p addons-369400 addons disable ingress-dns --alsologtostderr -v=1: (17.9823215s)
addons_test.go:313: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-369400 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-windows-amd64.exe -p addons-369400 addons disable ingress --alsologtostderr -v=1: (24.3923416s)
--- PASS: TestAddons/parallel/Ingress (74.83s)

                                                
                                    
TestAddons/parallel/InspektorGadget (28.84s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:840: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-stm9h" [a1ede618-fdc3-4bc4-b626-755f3d539549] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:840: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.0223368s
addons_test.go:843: (dbg) Run:  out/minikube-windows-amd64.exe addons disable inspektor-gadget -p addons-369400
addons_test.go:843: (dbg) Done: out/minikube-windows-amd64.exe addons disable inspektor-gadget -p addons-369400: (23.814981s)
--- PASS: TestAddons/parallel/InspektorGadget (28.84s)

                                                
                                    
TestAddons/parallel/MetricsServer (24.48s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 4.0039ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-jxg2b" [30d4e9c9-aea8-427f-896f-d89162d50a24] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.0091227s
addons_test.go:417: (dbg) Run:  kubectl --context addons-369400 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-369400 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:434: (dbg) Done: out/minikube-windows-amd64.exe -p addons-369400 addons disable metrics-server --alsologtostderr -v=1: (18.1212335s)
--- PASS: TestAddons/parallel/MetricsServer (24.48s)

                                                
                                    
TestAddons/parallel/HelmTiller (31.06s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 19.8731ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6677d64bcd-dxfrc" [4a947127-dc88-4d41-ad04-ab810ac930a2] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.0190892s
addons_test.go:475: (dbg) Run:  kubectl --context addons-369400 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-369400 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (8.2766831s)
addons_test.go:480: kubectl --context addons-369400 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: unexpected stderr: Unable to use a TTY - input is not a terminal or the right kind of file
If you don't see a command prompt, try pressing enter.
warning: couldn't attach to pod/helm-test, falling back to streaming logs: error stream protocol error: unknown error
addons_test.go:492: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-369400 addons disable helm-tiller --alsologtostderr -v=1
addons_test.go:492: (dbg) Done: out/minikube-windows-amd64.exe -p addons-369400 addons disable helm-tiller --alsologtostderr -v=1: (17.7198112s)
--- PASS: TestAddons/parallel/HelmTiller (31.06s)

                                                
                                    
TestAddons/parallel/CSI (86.29s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:563: csi-hostpath-driver pods stabilized in 32.1725ms
addons_test.go:566: (dbg) Run:  kubectl --context addons-369400 create -f testdata\csi-hostpath-driver\pvc.yaml
addons_test.go:571: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-369400 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-369400 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-369400 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-369400 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-369400 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:576: (dbg) Run:  kubectl --context addons-369400 create -f testdata\csi-hostpath-driver\pv-pod.yaml
addons_test.go:581: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [663a3ba4-37c2-4332-a926-f1aaa9db95d6] Pending
helpers_test.go:344: "task-pv-pod" [663a3ba4-37c2-4332-a926-f1aaa9db95d6] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [663a3ba4-37c2-4332-a926-f1aaa9db95d6] Running
addons_test.go:581: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 24.0065759s
addons_test.go:586: (dbg) Run:  kubectl --context addons-369400 create -f testdata\csi-hostpath-driver\snapshot.yaml
addons_test.go:591: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-369400 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-369400 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:596: (dbg) Run:  kubectl --context addons-369400 delete pod task-pv-pod
addons_test.go:596: (dbg) Done: kubectl --context addons-369400 delete pod task-pv-pod: (1.0636417s)
addons_test.go:602: (dbg) Run:  kubectl --context addons-369400 delete pvc hpvc
addons_test.go:608: (dbg) Run:  kubectl --context addons-369400 create -f testdata\csi-hostpath-driver\pvc-restore.yaml
addons_test.go:613: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-369400 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-369400 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-369400 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-369400 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-369400 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:618: (dbg) Run:  kubectl --context addons-369400 create -f testdata\csi-hostpath-driver\pv-pod-restore.yaml
addons_test.go:623: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [8c5fa844-a1b7-453d-affe-1eb6fae2d609] Pending
helpers_test.go:344: "task-pv-pod-restore" [8c5fa844-a1b7-453d-affe-1eb6fae2d609] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [8c5fa844-a1b7-453d-affe-1eb6fae2d609] Running
addons_test.go:623: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.0217528s
addons_test.go:628: (dbg) Run:  kubectl --context addons-369400 delete pod task-pv-pod-restore
addons_test.go:628: (dbg) Done: kubectl --context addons-369400 delete pod task-pv-pod-restore: (1.2982742s)
addons_test.go:632: (dbg) Run:  kubectl --context addons-369400 delete pvc hpvc-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-369400 delete volumesnapshot new-snapshot-demo
addons_test.go:640: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-369400 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:640: (dbg) Done: out/minikube-windows-amd64.exe -p addons-369400 addons disable csi-hostpath-driver --alsologtostderr -v=1: (22.9789826s)
addons_test.go:644: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-369400 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-windows-amd64.exe -p addons-369400 addons disable volumesnapshots --alsologtostderr -v=1: (16.5625917s)
--- PASS: TestAddons/parallel/CSI (86.29s)

                                                
                                    
TestAddons/parallel/Headlamp (40.23s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:826: (dbg) Run:  out/minikube-windows-amd64.exe addons enable headlamp -p addons-369400 --alsologtostderr -v=1
addons_test.go:826: (dbg) Done: out/minikube-windows-amd64.exe addons enable headlamp -p addons-369400 --alsologtostderr -v=1: (18.1988189s)
addons_test.go:831: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7fc69f7444-hww9j" [72cff3f6-876a-4e46-a5b8-a62cf7699b11] Pending
helpers_test.go:344: "headlamp-7fc69f7444-hww9j" [72cff3f6-876a-4e46-a5b8-a62cf7699b11] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7fc69f7444-hww9j" [72cff3f6-876a-4e46-a5b8-a62cf7699b11] Running
addons_test.go:831: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 22.0224247s
--- PASS: TestAddons/parallel/Headlamp (40.23s)

                                                
                                    
TestAddons/parallel/CloudSpanner (22.49s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:859: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6fcd4f6f98-57v62" [084f21cc-3cae-47db-9530-3db3df3010ef] Running
addons_test.go:859: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.0150981s
addons_test.go:862: (dbg) Run:  out/minikube-windows-amd64.exe addons disable cloud-spanner -p addons-369400
addons_test.go:862: (dbg) Done: out/minikube-windows-amd64.exe addons disable cloud-spanner -p addons-369400: (17.4672917s)
--- PASS: TestAddons/parallel/CloudSpanner (22.49s)

                                                
                                    
TestAddons/parallel/LocalPath (95.81s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:974: (dbg) Run:  kubectl --context addons-369400 apply -f testdata\storage-provisioner-rancher\pvc.yaml
addons_test.go:980: (dbg) Run:  kubectl --context addons-369400 apply -f testdata\storage-provisioner-rancher\pod.yaml
addons_test.go:984: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-369400 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-369400 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-369400 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-369400 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-369400 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-369400 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-369400 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-369400 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-369400 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-369400 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-369400 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-369400 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-369400 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-369400 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:987: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [7bd83c11-b4f2-4684-8c1b-331e4f0920d7] Pending
helpers_test.go:344: "test-local-path" [7bd83c11-b4f2-4684-8c1b-331e4f0920d7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [7bd83c11-b4f2-4684-8c1b-331e4f0920d7] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [7bd83c11-b4f2-4684-8c1b-331e4f0920d7] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:987: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.0129827s
addons_test.go:992: (dbg) Run:  kubectl --context addons-369400 get pvc test-pvc -o=json
addons_test.go:1001: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-369400 ssh "cat /opt/local-path-provisioner/pvc-d2e31ec4-d787-4fa8-8e02-97096b762939_default_test-pvc/file1"
addons_test.go:1001: (dbg) Done: out/minikube-windows-amd64.exe -p addons-369400 ssh "cat /opt/local-path-provisioner/pvc-d2e31ec4-d787-4fa8-8e02-97096b762939_default_test-pvc/file1": (11.6006706s)
addons_test.go:1013: (dbg) Run:  kubectl --context addons-369400 delete pod test-local-path
addons_test.go:1017: (dbg) Run:  kubectl --context addons-369400 delete pvc test-pvc
addons_test.go:1021: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-369400 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1021: (dbg) Done: out/minikube-windows-amd64.exe -p addons-369400 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (1m3.5440086s)
--- PASS: TestAddons/parallel/LocalPath (95.81s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (23.40s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1053: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-tgrj7" [c981a798-507a-4114-8d58-41f28df585bb] Running
addons_test.go:1053: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.0186191s
addons_test.go:1056: (dbg) Run:  out/minikube-windows-amd64.exe addons disable nvidia-device-plugin -p addons-369400
addons_test.go:1056: (dbg) Done: out/minikube-windows-amd64.exe addons disable nvidia-device-plugin -p addons-369400: (17.3682838s)
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (23.40s)

                                                
                                    
TestAddons/parallel/Yakd (5.02s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1064: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-5ddbf7d777-hbcmp" [f390f031-9b3f-4987-a9fa-7a27ac18701f] Running
addons_test.go:1064: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.0145805s
--- PASS: TestAddons/parallel/Yakd (5.02s)

                                                
                                    
TestAddons/parallel/Volcano (60.08s)

                                                
                                                
=== RUN   TestAddons/parallel/Volcano
=== PAUSE TestAddons/parallel/Volcano

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Volcano
addons_test.go:889: volcano-scheduler stabilized in 7.7171ms
addons_test.go:905: volcano-controller stabilized in 7.7171ms
addons_test.go:897: volcano-admission stabilized in 8.6164ms
addons_test.go:911: (dbg) TestAddons/parallel/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-765f888978-vdbdr" [925cfada-4645-41b2-ae0e-aca902ce8e02] Running
addons_test.go:911: (dbg) TestAddons/parallel/Volcano: app=volcano-scheduler healthy within 6.0198957s
addons_test.go:915: (dbg) TestAddons/parallel/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-7b497cf95b-h4chg" [2843394a-568a-46fb-a514-5abe0b1d545a] Running
addons_test.go:915: (dbg) TestAddons/parallel/Volcano: app=volcano-admission healthy within 6.0216131s
addons_test.go:919: (dbg) TestAddons/parallel/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controller-86c5446455-h8n57" [ff74aa88-b552-4149-972e-f59d712d8474] Running
addons_test.go:919: (dbg) TestAddons/parallel/Volcano: app=volcano-controller healthy within 5.0182918s
addons_test.go:924: (dbg) Run:  kubectl --context addons-369400 delete -n volcano-system job volcano-admission-init
addons_test.go:930: (dbg) Run:  kubectl --context addons-369400 create -f testdata\vcjob.yaml
addons_test.go:938: (dbg) Run:  kubectl --context addons-369400 get vcjob -n my-volcano
addons_test.go:956: (dbg) TestAddons/parallel/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [f8db59f2-939e-452b-bfba-438f6377a899] Pending
helpers_test.go:344: "test-job-nginx-0" [f8db59f2-939e-452b-bfba-438f6377a899] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [f8db59f2-939e-452b-bfba-438f6377a899] Running
addons_test.go:956: (dbg) TestAddons/parallel/Volcano: volcano.sh/job-name=test-job healthy within 16.0115862s
addons_test.go:960: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-369400 addons disable volcano --alsologtostderr -v=1
addons_test.go:960: (dbg) Done: out/minikube-windows-amd64.exe -p addons-369400 addons disable volcano --alsologtostderr -v=1: (26.0872999s)
--- PASS: TestAddons/parallel/Volcano (60.08s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.40s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:652: (dbg) Run:  kubectl --context addons-369400 create ns new-namespace
addons_test.go:666: (dbg) Run:  kubectl --context addons-369400 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.40s)

                                                
                                    
TestAddons/StoppedEnableDisable (58.27s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-windows-amd64.exe stop -p addons-369400
addons_test.go:174: (dbg) Done: out/minikube-windows-amd64.exe stop -p addons-369400: (44.4319319s)
addons_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-369400
addons_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p addons-369400: (5.575825s)
addons_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-369400
addons_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe addons disable dashboard -p addons-369400: (5.1951484s)
addons_test.go:187: (dbg) Run:  out/minikube-windows-amd64.exe addons disable gvisor -p addons-369400
addons_test.go:187: (dbg) Done: out/minikube-windows-amd64.exe addons disable gvisor -p addons-369400: (3.0603152s)
--- PASS: TestAddons/StoppedEnableDisable (58.27s)

                                                
                                    
TestCertExpiration (1097.62s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-917500 --memory=2048 --cert-expiration=3m --driver=hyperv
E0605 00:08:08.915480   14064 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-235400\client.crt: The system cannot find the path specified.
E0605 00:08:17.043799   14064 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-369400\client.crt: The system cannot find the path specified.
cert_options_test.go:123: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-917500 --memory=2048 --cert-expiration=3m --driver=hyperv: (8m45.9177623s)
E0605 00:16:45.715452   14064 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-235400\client.crt: The system cannot find the path specified.
E0605 00:18:00.343429   14064 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-369400\client.crt: The system cannot find the path specified.
E0605 00:18:17.049027   14064 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-369400\client.crt: The system cannot find the path specified.
cert_options_test.go:131: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-917500 --memory=2048 --cert-expiration=8760h --driver=hyperv
cert_options_test.go:131: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-917500 --memory=2048 --cert-expiration=8760h --driver=hyperv: (5m40.8660298s)
helpers_test.go:175: Cleaning up "cert-expiration-917500" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-expiration-917500
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-expiration-917500: (50.8190237s)
--- PASS: TestCertExpiration (1097.62s)

                                                
                                    
TestDockerFlags (330.88s)

                                                
                                                
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

                                                
                                                

                                                
                                                
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-flags-358900 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperv
docker_test.go:51: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-flags-358900 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperv: (4m20.0019162s)
docker_test.go:56: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-358900 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Done: out/minikube-windows-amd64.exe -p docker-flags-358900 ssh "sudo systemctl show docker --property=Environment --no-pager": (10.6247603s)
docker_test.go:67: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-358900 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Done: out/minikube-windows-amd64.exe -p docker-flags-358900 ssh "sudo systemctl show docker --property=ExecStart --no-pager": (11.4230279s)
helpers_test.go:175: Cleaning up "docker-flags-358900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-flags-358900
E0605 00:24:48.939452   14064 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-235400\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-flags-358900: (48.828484s)
--- PASS: TestDockerFlags (330.88s)

                                                
                                    
TestForceSystemdFlag (422.37s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-flag-670000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperv
docker_test.go:91: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-flag-670000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperv: (6m3.8877473s)
docker_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-flag-670000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-flag-670000 ssh "docker info --format {{.CgroupDriver}}": (10.4546097s)
helpers_test.go:175: Cleaning up "force-systemd-flag-670000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-flag-670000
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-flag-670000: (48.0071292s)
--- PASS: TestForceSystemdFlag (422.37s)

                                                
                                    
TestForceSystemdEnv (555.17s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-env-388200 --memory=2048 --alsologtostderr -v=5 --driver=hyperv
E0605 00:01:20.327509   14064 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-369400\client.crt: The system cannot find the path specified.
E0605 00:01:45.701836   14064 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-235400\client.crt: The system cannot find the path specified.
docker_test.go:155: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-env-388200 --memory=2048 --alsologtostderr -v=5 --driver=hyperv: (8m16.0314707s)
docker_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-env-388200 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-env-388200 ssh "docker info --format {{.CgroupDriver}}": (10.6090641s)
helpers_test.go:175: Cleaning up "force-systemd-env-388200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-env-388200
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-env-388200: (48.5321202s)
--- PASS: TestForceSystemdEnv (555.17s)

                                                
                                    
TestErrorSpam/start (18.45s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-658400 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-658400 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-658400 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-658400 start --dry-run: (6.13022s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-658400 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-658400 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-658400 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-658400 start --dry-run: (6.1440606s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-658400 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-658400 start --dry-run
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-658400 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-658400 start --dry-run: (6.1501563s)
--- PASS: TestErrorSpam/start (18.45s)

                                                
                                    
TestErrorSpam/status (39.46s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-658400 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-658400 status
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-658400 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-658400 status: (13.6715404s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-658400 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-658400 status
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-658400 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-658400 status: (12.942496s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-658400 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-658400 status
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-658400 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-658400 status: (12.8367259s)
--- PASS: TestErrorSpam/status (39.46s)

                                                
                                    
TestErrorSpam/pause (24.77s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-658400 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-658400 pause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-658400 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-658400 pause: (8.3968457s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-658400 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-658400 pause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-658400 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-658400 pause: (8.159573s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-658400 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-658400 pause
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-658400 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-658400 pause: (8.1974343s)
--- PASS: TestErrorSpam/pause (24.77s)

                                                
                                    
TestErrorSpam/unpause (25.55s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-658400 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-658400 unpause
E0604 21:48:16.969373   14064 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-369400\client.crt: The system cannot find the path specified.
E0604 21:48:16.984263   14064 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-369400\client.crt: The system cannot find the path specified.
E0604 21:48:17.000370   14064 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-369400\client.crt: The system cannot find the path specified.
E0604 21:48:17.031826   14064 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-369400\client.crt: The system cannot find the path specified.
E0604 21:48:17.079507   14064 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-369400\client.crt: The system cannot find the path specified.
E0604 21:48:17.174032   14064 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-369400\client.crt: The system cannot find the path specified.
E0604 21:48:17.348390   14064 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-369400\client.crt: The system cannot find the path specified.
E0604 21:48:17.682791   14064 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-369400\client.crt: The system cannot find the path specified.
E0604 21:48:18.335203   14064 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-369400\client.crt: The system cannot find the path specified.
E0604 21:48:19.615545   14064 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-369400\client.crt: The system cannot find the path specified.
E0604 21:48:22.190922   14064 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-369400\client.crt: The system cannot find the path specified.
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-658400 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-658400 unpause: (8.6650948s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-658400 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-658400 unpause
E0604 21:48:27.326946   14064 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-369400\client.crt: The system cannot find the path specified.
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-658400 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-658400 unpause: (8.4979471s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-658400 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-658400 unpause
E0604 21:48:37.580546   14064 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-369400\client.crt: The system cannot find the path specified.
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-658400 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-658400 unpause: (8.3775004s)
--- PASS: TestErrorSpam/unpause (25.55s)

                                                
                                    
TestErrorSpam/stop (65.95s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-658400 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-658400 stop
E0604 21:48:58.071352   14064 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-369400\client.crt: The system cannot find the path specified.
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-658400 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-658400 stop: (41.8063306s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-658400 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-658400 stop
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-658400 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-658400 stop: (12.2118317s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-658400 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-658400 stop
E0604 21:49:39.034745   14064 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-369400\client.crt: The system cannot find the path specified.
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-658400 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-658400 stop: (11.9247258s)
--- PASS: TestErrorSpam/stop (65.95s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\14064\hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.04s)

                                                
                                    
TestFunctional/serial/StartWithProxy (258.56s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-235400 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperv
E0604 21:51:00.968712   14064 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-369400\client.crt: The system cannot find the path specified.
E0604 21:53:16.981143   14064 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-369400\client.crt: The system cannot find the path specified.
E0604 21:53:44.818470   14064 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-369400\client.crt: The system cannot find the path specified.
functional_test.go:2230: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-235400 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperv: (4m18.552931s)
--- PASS: TestFunctional/serial/StartWithProxy (258.56s)

                                                
                                    
TestFunctional/serial/AuditLog (0.00s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (134.56s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-235400 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-235400 --alsologtostderr -v=8: (2m14.5632782s)
functional_test.go:659: soft start took 2m14.5640537s for "functional-235400" cluster.
--- PASS: TestFunctional/serial/SoftStart (134.56s)

                                                
                                    
TestFunctional/serial/KubeContext (0.15s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.15s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.25s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-235400 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.25s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (28.61s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-235400 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-235400 cache add registry.k8s.io/pause:3.1: (9.6703699s)
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-235400 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-235400 cache add registry.k8s.io/pause:3.3: (9.4635914s)
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-235400 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-235400 cache add registry.k8s.io/pause:latest: (9.4768328s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (28.61s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (11.96s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-235400 C:\Users\jenkins.minikube6\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local2270355251\001
functional_test.go:1073: (dbg) Done: docker build -t minikube-local-cache-test:functional-235400 C:\Users\jenkins.minikube6\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local2270355251\001: (2.4772487s)
functional_test.go:1085: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-235400 cache add minikube-local-cache-test:functional-235400
functional_test.go:1085: (dbg) Done: out/minikube-windows-amd64.exe -p functional-235400 cache add minikube-local-cache-test:functional-235400: (8.9882799s)
functional_test.go:1090: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-235400 cache delete minikube-local-cache-test:functional-235400
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-235400
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (11.96s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.28s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.28s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.28s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-windows-amd64.exe cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.28s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (10.5s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-235400 ssh sudo crictl images
functional_test.go:1120: (dbg) Done: out/minikube-windows-amd64.exe -p functional-235400 ssh sudo crictl images: (10.5025628s)
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (10.50s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (40.55s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-235400 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1143: (dbg) Done: out/minikube-windows-amd64.exe -p functional-235400 ssh sudo docker rmi registry.k8s.io/pause:latest: (10.3823004s)
functional_test.go:1149: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-235400 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-235400 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (10.4660755s)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	W0604 21:57:39.443134    9564 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-235400 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-windows-amd64.exe -p functional-235400 cache reload: (9.1831583s)
functional_test.go:1159: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-235400 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1159: (dbg) Done: out/minikube-windows-amd64.exe -p functional-235400 ssh sudo crictl inspecti registry.k8s.io/pause:latest: (10.5122169s)
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (40.55s)
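The cache_reload sequence above can be replayed by hand; a minimal sketch of the same flow, assuming the functional-235400 profile and the locally built out/minikube-windows-amd64.exe binary used in this run:

  # add an image to minikube's file-based cache
  out/minikube-windows-amd64.exe -p functional-235400 cache add registry.k8s.io/pause:latest
  # remove it from the node's container runtime, then restore it from the cache
  out/minikube-windows-amd64.exe -p functional-235400 ssh sudo docker rmi registry.k8s.io/pause:latest
  out/minikube-windows-amd64.exe -p functional-235400 cache reload
  # confirm the image is present again inside the node
  out/minikube-windows-amd64.exe -p functional-235400 ssh sudo crictl inspecti registry.k8s.io/pause:latest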

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.58s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.58s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.55s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-235400 kubectl -- --context functional-235400 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.55s)

                                                
                                    
TestFunctional/serial/ExtraConfig (136.01s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-235400 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-235400 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (2m16.0082121s)
functional_test.go:757: restart took 2m16.0082121s for "functional-235400" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (136.01s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.2s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-235400 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.20s)

                                                
                                    
TestFunctional/serial/LogsCmd (9.21s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-235400 logs
functional_test.go:1232: (dbg) Done: out/minikube-windows-amd64.exe -p functional-235400 logs: (9.2018295s)
--- PASS: TestFunctional/serial/LogsCmd (9.21s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (11.54s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-235400 logs --file C:\Users\jenkins.minikube6\AppData\Local\Temp\TestFunctionalserialLogsFileCmd3155438942\001\logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-windows-amd64.exe -p functional-235400 logs --file C:\Users\jenkins.minikube6\AppData\Local\Temp\TestFunctionalserialLogsFileCmd3155438942\001\logs.txt: (11.5309585s)
--- PASS: TestFunctional/serial/LogsFileCmd (11.54s)

                                                
                                    
TestFunctional/serial/InvalidService (21.95s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-235400 apply -f testdata\invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-windows-amd64.exe service invalid-svc -p functional-235400
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-windows-amd64.exe service invalid-svc -p functional-235400: exit status 115 (17.8337822s)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://172.20.136.157:31991 |
	|-----------|-------------|-------------|-----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0604 22:01:27.223756    9740 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube_service_8fb87d8e79e761d215f3221b4a4d8a6300edfb06_1.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-235400 delete -f testdata\invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (21.95s)
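The InvalidService check exercises the error path for a Service with no running backing pod; a rough sketch of the same sequence, using the testdata\invalidsvc.yaml manifest referenced above:

  kubectl --context functional-235400 apply -f testdata\invalidsvc.yaml
  # expected to fail: the log above shows exit status 115 with an SVC_UNREACHABLE error
  out/minikube-windows-amd64.exe service invalid-svc -p functional-235400
  kubectl --context functional-235400 delete -f testdata\invalidsvc.yaml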

                                                
                                    
TestFunctional/parallel/StatusCmd (45.43s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-235400 status
functional_test.go:850: (dbg) Done: out/minikube-windows-amd64.exe -p functional-235400 status: (14.9406185s)
functional_test.go:856: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-235400 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:856: (dbg) Done: out/minikube-windows-amd64.exe -p functional-235400 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: (15.4529775s)
functional_test.go:868: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-235400 status -o json
functional_test.go:868: (dbg) Done: out/minikube-windows-amd64.exe -p functional-235400 status -o json: (15.0342215s)
--- PASS: TestFunctional/parallel/StatusCmd (45.43s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (28.97s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-235400 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-235400 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-57b4589c47-ds9cm" [af61436a-22d7-483d-89be-b3f1786c6138] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-57b4589c47-ds9cm" [af61436a-22d7-483d-89be-b3f1786c6138] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.0178698s
functional_test.go:1645: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-235400 service hello-node-connect --url
functional_test.go:1645: (dbg) Done: out/minikube-windows-amd64.exe -p functional-235400 service hello-node-connect --url: (21.3604447s)
functional_test.go:1651: found endpoint for hello-node-connect: http://172.20.136.157:30736
functional_test.go:1671: http://172.20.136.157:30736: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-57b4589c47-ds9cm

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://172.20.136.157:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=172.20.136.157:30736
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (28.97s)
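The connect test follows the standard deploy/expose/lookup pattern; a minimal sketch under the same assumptions (echoserver:1.8 image, NodePort on 8080):

  kubectl --context functional-235400 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
  kubectl --context functional-235400 expose deployment hello-node-connect --type=NodePort --port=8080
  # once the pod is Running, print the reachable NodePort URL
  out/minikube-windows-amd64.exe -p functional-235400 service hello-node-connect --url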

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.96s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-235400 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-235400 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.96s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (48.98s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [787187d8-02e6-447b-a0a1-dc664d9226e5] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.0087867s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-235400 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-235400 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-235400 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-235400 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [526648e2-3a21-4319-a4c0-e901d3edc4f3] Pending
helpers_test.go:344: "sp-pod" [526648e2-3a21-4319-a4c0-e901d3edc4f3] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [526648e2-3a21-4319-a4c0-e901d3edc4f3] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 27.0208675s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-235400 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-235400 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-235400 delete -f testdata/storage-provisioner/pod.yaml: (1.4360033s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-235400 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [ecf52371-5306-41fb-8517-7dd360329cdf] Pending
helpers_test.go:344: "sp-pod" [ecf52371-5306-41fb-8517-7dd360329cdf] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [ecf52371-5306-41fb-8517-7dd360329cdf] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.0211166s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-235400 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (48.98s)
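The PersistentVolumeClaim test writes a file, recreates the pod, and checks that the file survives on the claim; a condensed sketch using the storage-provisioner testdata manifests named above:

  kubectl --context functional-235400 apply -f testdata/storage-provisioner/pvc.yaml
  kubectl --context functional-235400 apply -f testdata/storage-provisioner/pod.yaml
  kubectl --context functional-235400 exec sp-pod -- touch /tmp/mount/foo
  # recreate the pod and confirm the file persisted on the bound volume
  kubectl --context functional-235400 delete -f testdata/storage-provisioner/pod.yaml
  kubectl --context functional-235400 apply -f testdata/storage-provisioner/pod.yaml
  kubectl --context functional-235400 exec sp-pod -- ls /tmp/mount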

                                                
                                    
TestFunctional/parallel/SSHCmd (21.7s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-235400 ssh "echo hello"
functional_test.go:1721: (dbg) Done: out/minikube-windows-amd64.exe -p functional-235400 ssh "echo hello": (10.8939778s)
functional_test.go:1738: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-235400 ssh "cat /etc/hostname"
functional_test.go:1738: (dbg) Done: out/minikube-windows-amd64.exe -p functional-235400 ssh "cat /etc/hostname": (10.799866s)
--- PASS: TestFunctional/parallel/SSHCmd (21.70s)

                                                
                                    
TestFunctional/parallel/CpCmd (62.74s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-235400 cp testdata\cp-test.txt /home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-235400 cp testdata\cp-test.txt /home/docker/cp-test.txt: (8.7109933s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-235400 ssh -n functional-235400 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-235400 ssh -n functional-235400 "sudo cat /home/docker/cp-test.txt": (10.6612957s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-235400 cp functional-235400:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestFunctionalparallelCpCmd3716705439\001\cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-235400 cp functional-235400:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestFunctionalparallelCpCmd3716705439\001\cp-test.txt: (11.5981474s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-235400 ssh -n functional-235400 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-235400 ssh -n functional-235400 "sudo cat /home/docker/cp-test.txt": (11.6119055s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-235400 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-235400 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt: (8.5919089s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-235400 ssh -n functional-235400 "sudo cat /tmp/does/not/exist/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-235400 ssh -n functional-235400 "sudo cat /tmp/does/not/exist/cp-test.txt": (11.5599226s)
--- PASS: TestFunctional/parallel/CpCmd (62.74s)
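The copy commands above can be replayed directly; a minimal sketch, where the host-side destination path is an arbitrary placeholder rather than the temp directory the test uses:

  # copy a local file into the node, then read it back over ssh
  out/minikube-windows-amd64.exe -p functional-235400 cp testdata\cp-test.txt /home/docker/cp-test.txt
  out/minikube-windows-amd64.exe -p functional-235400 ssh -n functional-235400 "sudo cat /home/docker/cp-test.txt"
  # copy the file from the node back to the host (destination path is a placeholder)
  out/minikube-windows-amd64.exe -p functional-235400 cp functional-235400:/home/docker/cp-test.txt C:\some\local\dir\cp-test.txt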

                                                
                                    
TestFunctional/parallel/MySQL (66.78s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-235400 replace --force -f testdata\mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-64454c8b5c-mvvhd" [6497e9c4-ae08-42b2-b04d-03ac369d2d39] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-64454c8b5c-mvvhd" [6497e9c4-ae08-42b2-b04d-03ac369d2d39] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 50.0175237s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-235400 exec mysql-64454c8b5c-mvvhd -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-235400 exec mysql-64454c8b5c-mvvhd -- mysql -ppassword -e "show databases;": exit status 1 (324.0531ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-235400 exec mysql-64454c8b5c-mvvhd -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-235400 exec mysql-64454c8b5c-mvvhd -- mysql -ppassword -e "show databases;": exit status 1 (336.1077ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-235400 exec mysql-64454c8b5c-mvvhd -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-235400 exec mysql-64454c8b5c-mvvhd -- mysql -ppassword -e "show databases;": exit status 1 (346.8141ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-235400 exec mysql-64454c8b5c-mvvhd -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-235400 exec mysql-64454c8b5c-mvvhd -- mysql -ppassword -e "show databases;": exit status 1 (399.6467ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-235400 exec mysql-64454c8b5c-mvvhd -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-235400 exec mysql-64454c8b5c-mvvhd -- mysql -ppassword -e "show databases;": exit status 1 (327.9683ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-235400 exec mysql-64454c8b5c-mvvhd -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (66.78s)

                                                
                                    
TestFunctional/parallel/FileSync (12.05s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/14064/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-235400 ssh "sudo cat /etc/test/nested/copy/14064/hosts"
functional_test.go:1927: (dbg) Done: out/minikube-windows-amd64.exe -p functional-235400 ssh "sudo cat /etc/test/nested/copy/14064/hosts": (12.0477648s)
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (12.05s)

                                                
                                    
TestFunctional/parallel/CertSync (66.3s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/14064.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-235400 ssh "sudo cat /etc/ssl/certs/14064.pem"
functional_test.go:1969: (dbg) Done: out/minikube-windows-amd64.exe -p functional-235400 ssh "sudo cat /etc/ssl/certs/14064.pem": (11.1166922s)
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/14064.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-235400 ssh "sudo cat /usr/share/ca-certificates/14064.pem"
functional_test.go:1969: (dbg) Done: out/minikube-windows-amd64.exe -p functional-235400 ssh "sudo cat /usr/share/ca-certificates/14064.pem": (10.8623993s)
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-235400 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1969: (dbg) Done: out/minikube-windows-amd64.exe -p functional-235400 ssh "sudo cat /etc/ssl/certs/51391683.0": (11.1200592s)
functional_test.go:1995: Checking for existence of /etc/ssl/certs/140642.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-235400 ssh "sudo cat /etc/ssl/certs/140642.pem"
functional_test.go:1996: (dbg) Done: out/minikube-windows-amd64.exe -p functional-235400 ssh "sudo cat /etc/ssl/certs/140642.pem": (11.776468s)
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/140642.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-235400 ssh "sudo cat /usr/share/ca-certificates/140642.pem"
functional_test.go:1996: (dbg) Done: out/minikube-windows-amd64.exe -p functional-235400 ssh "sudo cat /usr/share/ca-certificates/140642.pem": (10.8033742s)
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-235400 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
functional_test.go:1996: (dbg) Done: out/minikube-windows-amd64.exe -p functional-235400 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": (10.6214422s)
--- PASS: TestFunctional/parallel/CertSync (66.30s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.2s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-235400 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.20s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (10.86s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-235400 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-235400 ssh "sudo systemctl is-active crio": exit status 1 (10.8559933s)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	W0604 22:03:28.491090    2020 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (10.86s)
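The runtime check relies on systemctl's exit code; a one-line sketch of the same probe (the exit status 3 seen above means the crio unit is inactive, which is what the test expects on a Docker-runtime cluster):

  out/minikube-windows-amd64.exe -p functional-235400 ssh "sudo systemctl is-active crio"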

                                                
                                    
TestFunctional/parallel/License (3.63s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-windows-amd64.exe license
functional_test.go:2284: (dbg) Done: out/minikube-windows-amd64.exe license: (3.6108646s)
--- PASS: TestFunctional/parallel/License (3.63s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (19.48s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-235400 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-235400 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6d85cfcfd8-24vmc" [943a52df-2f88-4eb9-b4de-08fd1c89c6ac] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6d85cfcfd8-24vmc" [943a52df-2f88-4eb9-b4de-08fd1c89c6ac] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 19.0195909s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (19.48s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (15.21s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-235400 service list
functional_test.go:1455: (dbg) Done: out/minikube-windows-amd64.exe -p functional-235400 service list: (15.2055455s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (15.21s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (9.57s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-235400 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-235400 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-235400 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 3612: OpenProcess: The parameter is incorrect.
helpers_test.go:508: unable to kill pid 14008: TerminateProcess: Access is denied.
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-235400 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (9.57s)
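The tunnel subtests run the tunnel as a long-lived background process; a minimal sketch of starting it by hand (the kill/OpenProcess warnings above come from the harness tearing that daemon down on Windows):

  # keep this running in its own shell; it provides the route used by LoadBalancer services
  out/minikube-windows-amd64.exe -p functional-235400 tunnel --alsologtostderr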

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-235400 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (14.73s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-235400 apply -f testdata\testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [b6a510c3-c0c1-438b-b9b3-5dd9b1a506d3] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [b6a510c3-c0c1-438b-b9b3-5dd9b1a506d3] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 14.0216892s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (14.73s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (14.36s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-235400 service list -o json
functional_test.go:1485: (dbg) Done: out/minikube-windows-amd64.exe -p functional-235400 service list -o json: (14.3597008s)
functional_test.go:1490: Took "14.3614788s" to run "out/minikube-windows-amd64.exe -p functional-235400 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (14.36s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-235400 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 13736: TerminateProcess: Access is denied.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (12.52s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-windows-amd64.exe profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
functional_test.go:1271: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (11.9341076s)
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (12.52s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (12.78s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-windows-amd64.exe profile list
functional_test.go:1306: (dbg) Done: out/minikube-windows-amd64.exe profile list: (12.4386107s)
functional_test.go:1311: Took "12.4391525s" to run "out/minikube-windows-amd64.exe profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-windows-amd64.exe profile list -l
functional_test.go:1325: Took "342.0577ms" to run "out/minikube-windows-amd64.exe profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (12.78s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (12.19s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json
functional_test.go:1357: (dbg) Done: out/minikube-windows-amd64.exe profile list -o json: (11.9202277s)
functional_test.go:1362: Took "11.9207761s" to run "out/minikube-windows-amd64.exe profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json --light
functional_test.go:1375: Took "269.6236ms" to run "out/minikube-windows-amd64.exe profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (12.19s)
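The profile subtests only vary the output flags; a short sketch of the variants compared above (the -l / --light forms skip cluster status checks, which is consistent with the much shorter run times logged here):

  out/minikube-windows-amd64.exe profile list
  out/minikube-windows-amd64.exe profile list -l
  out/minikube-windows-amd64.exe profile list -o json
  out/minikube-windows-amd64.exe profile list -o json --light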

                                                
                                    
TestFunctional/parallel/Version/short (0.58s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-235400 version --short
--- PASS: TestFunctional/parallel/Version/short (0.58s)

                                                
                                    
TestFunctional/parallel/Version/components (8.64s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-235400 version -o=json --components
E0604 22:04:40.192094   14064 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-369400\client.crt: The system cannot find the path specified.
functional_test.go:2266: (dbg) Done: out/minikube-windows-amd64.exe -p functional-235400 version -o=json --components: (8.6359579s)
--- PASS: TestFunctional/parallel/Version/components (8.64s)

                                                
                                    
TestFunctional/parallel/DockerEnv/powershell (47.34s)

=== RUN   TestFunctional/parallel/DockerEnv/powershell
functional_test.go:495: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-235400 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-235400"
functional_test.go:495: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-235400 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-235400": (31.4131881s)
functional_test.go:518: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-235400 docker-env | Invoke-Expression ; docker images"
functional_test.go:518: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-235400 docker-env | Invoke-Expression ; docker images": (15.9121459s)
--- PASS: TestFunctional/parallel/DockerEnv/powershell (47.34s)
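The docker-env check drives PowerShell directly; a minimal sketch of pointing the local docker client at the cluster's Docker daemon, as exercised above:

  # evaluate the exported environment variables, then query the in-cluster daemon
  out/minikube-windows-amd64.exe -p functional-235400 docker-env | Invoke-Expression
  docker images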

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (8.55s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-235400 image ls --format short --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-235400 image ls --format short --alsologtostderr: (8.5524829s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-235400 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.1
registry.k8s.io/kube-proxy:v1.30.1
registry.k8s.io/kube-controller-manager:v1.30.1
registry.k8s.io/kube-apiserver:v1.30.1
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/google-containers/addon-resizer:functional-235400
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-235400
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-235400 image ls --format short --alsologtostderr:
W0604 22:06:13.892234    5500 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0604 22:06:14.000241    5500 out.go:291] Setting OutFile to fd 904 ...
I0604 22:06:14.016077    5500 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0604 22:06:14.016077    5500 out.go:304] Setting ErrFile to fd 900...
I0604 22:06:14.016077    5500 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0604 22:06:14.031079    5500 config.go:182] Loaded profile config "functional-235400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0604 22:06:14.032078    5500 config.go:182] Loaded profile config "functional-235400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0604 22:06:14.033088    5500 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-235400 ).state
I0604 22:06:16.596214    5500 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0604 22:06:16.596214    5500 main.go:141] libmachine: [stderr =====>] : 
I0604 22:06:16.614201    5500 ssh_runner.go:195] Run: systemctl --version
I0604 22:06:16.614201    5500 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-235400 ).state
I0604 22:06:19.098738    5500 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0604 22:06:19.099006    5500 main.go:141] libmachine: [stderr =====>] : 
I0604 22:06:19.099065    5500 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-235400 ).networkadapters[0]).ipaddresses[0]
I0604 22:06:22.102256    5500 main.go:141] libmachine: [stdout =====>] : 172.20.136.157

                                                
                                                
I0604 22:06:22.102256    5500 main.go:141] libmachine: [stderr =====>] : 
I0604 22:06:22.103085    5500 sshutil.go:53] new ssh client: &{IP:172.20.136.157 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-235400\id_rsa Username:docker}
I0604 22:06:22.227395    5500 ssh_runner.go:235] Completed: systemctl --version: (5.613151s)
I0604 22:06:22.237380    5500 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (8.55s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (8.12s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-235400 image ls --format table --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-235400 image ls --format table --alsologtostderr: (8.11506s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-235400 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| registry.k8s.io/kube-apiserver              | v1.30.1           | 91be940803172 | 117MB  |
| docker.io/library/mysql                     | 5.7               | 5107333e08a87 | 501MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| docker.io/library/nginx                     | alpine            | 70ea0d8cc5300 | 48.3MB |
| registry.k8s.io/coredns/coredns             | v1.11.1           | cbb01a7bd410d | 59.8MB |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| gcr.io/google-containers/addon-resizer      | functional-235400 | ffd4cfbbe753e | 32.9MB |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| docker.io/library/nginx                     | latest            | 4f67c83422ec7 | 188MB  |
| registry.k8s.io/kube-proxy                  | v1.30.1           | 747097150317f | 84.7MB |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| registry.k8s.io/etcd                        | 3.5.12-0          | 3861cfcd7c04c | 149MB  |
| docker.io/library/minikube-local-cache-test | functional-235400 | 030085366f3f8 | 30B    |
| registry.k8s.io/kube-scheduler              | v1.30.1           | a52dc94f0a912 | 62MB   |
| registry.k8s.io/kube-controller-manager     | v1.30.1           | 25a1387cdab82 | 111MB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-235400 image ls --format table --alsologtostderr:
W0604 22:06:22.427266    2556 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0604 22:06:22.519319    2556 out.go:291] Setting OutFile to fd 1092 ...
I0604 22:06:22.520259    2556 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0604 22:06:22.520259    2556 out.go:304] Setting ErrFile to fd 1096...
I0604 22:06:22.520259    2556 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0604 22:06:22.538311    2556 config.go:182] Loaded profile config "functional-235400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0604 22:06:22.539326    2556 config.go:182] Loaded profile config "functional-235400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0604 22:06:22.540350    2556 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-235400 ).state
I0604 22:06:24.976141    2556 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0604 22:06:24.976141    2556 main.go:141] libmachine: [stderr =====>] : 
I0604 22:06:24.992612    2556 ssh_runner.go:195] Run: systemctl --version
I0604 22:06:24.992612    2556 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-235400 ).state
I0604 22:06:27.363361    2556 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0604 22:06:27.363425    2556 main.go:141] libmachine: [stderr =====>] : 
I0604 22:06:27.363998    2556 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-235400 ).networkadapters[0]).ipaddresses[0]
I0604 22:06:30.227534    2556 main.go:141] libmachine: [stdout =====>] : 172.20.136.157

                                                
                                                
I0604 22:06:30.227768    2556 main.go:141] libmachine: [stderr =====>] : 
I0604 22:06:30.228016    2556 sshutil.go:53] new ssh client: &{IP:172.20.136.157 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-235400\id_rsa Username:docker}
I0604 22:06:30.342041    2556 ssh_runner.go:235] Completed: systemctl --version: (5.3493881s)
I0604 22:06:30.352769    2556 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (8.12s)
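[Editor's note] The libmachine lines above show how the Hyper-V driver discovers the VM's state and IP address before it opens an SSH session: it shells out to powershell.exe and evaluates Hyper-V\Get-VM expressions. The following is a minimal standalone Go sketch of those two queries, reusing the VM name and powershell.exe path that appear in the log; it is illustrative only, not minikube's driver code.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// runPS evaluates a single PowerShell expression the same way the
// libmachine log lines above do, and returns its trimmed stdout.
func runPS(expr string) (string, error) {
	out, err := exec.Command(
		`C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
		"-NoProfile", "-NonInteractive", expr,
	).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	vm := "functional-235400" // VM name taken from the log above

	state, err := runPS(fmt.Sprintf(`( Hyper-V\Get-VM %s ).state`, vm))
	if err != nil {
		panic(err)
	}
	ip, err := runPS(fmt.Sprintf(`(( Hyper-V\Get-VM %s ).networkadapters[0]).ipaddresses[0]`, vm))
	if err != nil {
		panic(err)
	}
	fmt.Printf("state=%s ip=%s\n", state, ip) // e.g. state=Running ip=172.20.136.157
}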

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (8.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-235400 image ls --format json --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-235400 image ls --format json --alsologtostderr: (8.5037264s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-235400 image ls --format json --alsologtostderr:
[{"id":"70ea0d8cc5300acde42073a2fbc0d28964ddb6e3c31263d92589c2320c3ccba4","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"48300000"},{"id":"25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.1"],"size":"111000000"},{"id":"a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.1"],"size":"62000000"},{"id":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"149000000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"4f67c83422ec747235357c04556616234e66fc3fa39cb4f40b2d4441ddd8f100","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"188000000"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34f
cc6bb4","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"59800000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"501000000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"030085366f3f89425566b7f3074e4cb7c195d772b180797f9559a1de2ad7353f","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:func
tional-235400"],"size":"30"},{"id":"91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.1"],"size":"117000000"},{"id":"747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.30.1"],"size":"84700000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-235400"],"size":"32900000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-235400 image ls --format json --alsologtostderr:
W0604 22:06:13.891260    2888 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0604 22:06:13.995501    2888 out.go:291] Setting OutFile to fd 744 ...
I0604 22:06:13.996384    2888 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0604 22:06:13.996384    2888 out.go:304] Setting ErrFile to fd 1028...
I0604 22:06:13.996462    2888 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0604 22:06:14.014084    2888 config.go:182] Loaded profile config "functional-235400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0604 22:06:14.014084    2888 config.go:182] Loaded profile config "functional-235400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0604 22:06:14.015159    2888 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-235400 ).state
I0604 22:06:16.596214    2888 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0604 22:06:16.596214    2888 main.go:141] libmachine: [stderr =====>] : 
I0604 22:06:16.611197    2888 ssh_runner.go:195] Run: systemctl --version
I0604 22:06:16.611197    2888 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-235400 ).state
I0604 22:06:19.097298    2888 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0604 22:06:19.097968    2888 main.go:141] libmachine: [stderr =====>] : 
I0604 22:06:19.097968    2888 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-235400 ).networkadapters[0]).ipaddresses[0]
I0604 22:06:22.067545    2888 main.go:141] libmachine: [stdout =====>] : 172.20.136.157

                                                
                                                
I0604 22:06:22.067620    2888 main.go:141] libmachine: [stderr =====>] : 
I0604 22:06:22.068124    2888 sshutil.go:53] new ssh client: &{IP:172.20.136.157 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-235400\id_rsa Username:docker}
I0604 22:06:22.168864    2888 ssh_runner.go:235] Completed: systemctl --version: (5.5576251s)
I0604 22:06:22.178863    2888 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (8.50s)
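[Editor's note] The JSON stdout above is a single array of objects with id, repoDigests, repoTags and size fields. Below is a hedged Go sketch that runs the same image ls --format json command and decodes that shape; the struct and its field names are inferred from the printed output, not taken from minikube's source.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// listedImage mirrors the fields visible in the stdout above;
// the struct name is ours, not minikube's.
type listedImage struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"`
}

func main() {
	out, err := exec.Command("out/minikube-windows-amd64.exe",
		"-p", "functional-235400", "image", "ls", "--format", "json").Output()
	if err != nil {
		panic(err)
	}
	var images []listedImage
	if err := json.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		fmt.Printf("%-60v %s bytes\n", img.RepoTags, img.Size)
	}
}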

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (8.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-235400 image ls --format yaml --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-235400 image ls --format yaml --alsologtostderr: (8.6564866s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-235400 image ls --format yaml --alsologtostderr:
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "59800000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-235400
size: "32900000"
- id: 3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "149000000"
- id: 4f67c83422ec747235357c04556616234e66fc3fa39cb4f40b2d4441ddd8f100
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "188000000"
- id: 25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.1
size: "111000000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 030085366f3f89425566b7f3074e4cb7c195d772b180797f9559a1de2ad7353f
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-235400
size: "30"
- id: a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.1
size: "62000000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 70ea0d8cc5300acde42073a2fbc0d28964ddb6e3c31263d92589c2320c3ccba4
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "48300000"
- id: 747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.30.1
size: "84700000"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "501000000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.1
size: "117000000"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-235400 image ls --format yaml --alsologtostderr:
W0604 22:06:13.894241    5584 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0604 22:06:13.998040    5584 out.go:291] Setting OutFile to fd 980 ...
I0604 22:06:14.015159    5584 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0604 22:06:14.015159    5584 out.go:304] Setting ErrFile to fd 936...
I0604 22:06:14.015159    5584 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0604 22:06:14.046093    5584 config.go:182] Loaded profile config "functional-235400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0604 22:06:14.046093    5584 config.go:182] Loaded profile config "functional-235400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0604 22:06:14.048084    5584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-235400 ).state
I0604 22:06:16.678255    5584 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0604 22:06:16.678255    5584 main.go:141] libmachine: [stderr =====>] : 
I0604 22:06:16.693786    5584 ssh_runner.go:195] Run: systemctl --version
I0604 22:06:16.693786    5584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-235400 ).state
I0604 22:06:19.210405    5584 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0604 22:06:19.210484    5584 main.go:141] libmachine: [stderr =====>] : 
I0604 22:06:19.210484    5584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-235400 ).networkadapters[0]).ipaddresses[0]
I0604 22:06:22.204323    5584 main.go:141] libmachine: [stdout =====>] : 172.20.136.157

                                                
                                                
I0604 22:06:22.204323    5584 main.go:141] libmachine: [stderr =====>] : 
I0604 22:06:22.204950    5584 sshutil.go:53] new ssh client: &{IP:172.20.136.157 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-235400\id_rsa Username:docker}
I0604 22:06:22.327272    5584 ssh_runner.go:235] Completed: systemctl --version: (5.6334425s)
I0604 22:06:22.342266    5584 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (8.66s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (28.87s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-235400 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-235400 ssh pgrep buildkitd: exit status 1 (10.4490946s)

                                                
                                                
** stderr ** 
	W0604 22:06:22.388256   14172 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-235400 image build -t localhost/my-image:functional-235400 testdata\build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-windows-amd64.exe -p functional-235400 image build -t localhost/my-image:functional-235400 testdata\build --alsologtostderr: (10.4276755s)
functional_test.go:319: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-235400 image build -t localhost/my-image:functional-235400 testdata\build --alsologtostderr:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in 73c52a1a8b61
---> Removed intermediate container 73c52a1a8b61
---> 43ea2aa44f4a
Step 3/3 : ADD content.txt /
---> 6168882a716d
Successfully built 6168882a716d
Successfully tagged localhost/my-image:functional-235400
functional_test.go:322: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-235400 image build -t localhost/my-image:functional-235400 testdata\build --alsologtostderr:
W0604 22:06:32.824965   13708 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0604 22:06:32.913341   13708 out.go:291] Setting OutFile to fd 1108 ...
I0604 22:06:32.936016   13708 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0604 22:06:32.936105   13708 out.go:304] Setting ErrFile to fd 1112...
I0604 22:06:32.936105   13708 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0604 22:06:32.953095   13708 config.go:182] Loaded profile config "functional-235400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0604 22:06:32.972313   13708 config.go:182] Loaded profile config "functional-235400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0604 22:06:32.973199   13708 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-235400 ).state
I0604 22:06:35.409216   13708 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0604 22:06:35.409306   13708 main.go:141] libmachine: [stderr =====>] : 
I0604 22:06:35.422274   13708 ssh_runner.go:195] Run: systemctl --version
I0604 22:06:35.422274   13708 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-235400 ).state
I0604 22:06:37.821201   13708 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0604 22:06:37.821385   13708 main.go:141] libmachine: [stderr =====>] : 
I0604 22:06:37.821459   13708 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-235400 ).networkadapters[0]).ipaddresses[0]
I0604 22:06:40.599084   13708 main.go:141] libmachine: [stdout =====>] : 172.20.136.157

                                                
                                                
I0604 22:06:40.599084   13708 main.go:141] libmachine: [stderr =====>] : 
I0604 22:06:40.599308   13708 sshutil.go:53] new ssh client: &{IP:172.20.136.157 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-235400\id_rsa Username:docker}
I0604 22:06:40.703019   13708 ssh_runner.go:235] Completed: systemctl --version: (5.2805809s)
I0604 22:06:40.703140   13708 build_images.go:161] Building image from path: C:\Users\jenkins.minikube6\AppData\Local\Temp\build.848578120.tar
I0604 22:06:40.718672   13708 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0604 22:06:40.753318   13708 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.848578120.tar
I0604 22:06:40.763179   13708 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.848578120.tar: stat -c "%s %y" /var/lib/minikube/build/build.848578120.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.848578120.tar': No such file or directory
I0604 22:06:40.763316   13708 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\AppData\Local\Temp\build.848578120.tar --> /var/lib/minikube/build/build.848578120.tar (3072 bytes)
I0604 22:06:40.837613   13708 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.848578120
I0604 22:06:40.876162   13708 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.848578120 -xf /var/lib/minikube/build/build.848578120.tar
I0604 22:06:40.897257   13708 docker.go:360] Building image: /var/lib/minikube/build/build.848578120
I0604 22:06:40.907786   13708 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-235400 /var/lib/minikube/build/build.848578120
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/

                                                
                                                
I0604 22:06:43.036597   13708 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-235400 /var/lib/minikube/build/build.848578120: (2.128795s)
I0604 22:06:43.053757   13708 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.848578120
I0604 22:06:43.093230   13708 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.848578120.tar
I0604 22:06:43.114722   13708 build_images.go:217] Built localhost/my-image:functional-235400 from C:\Users\jenkins.minikube6\AppData\Local\Temp\build.848578120.tar
I0604 22:06:43.114908   13708 build_images.go:133] succeeded building to: functional-235400
I0604 22:06:43.114941   13708 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-235400 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-235400 image ls: (7.9926832s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (28.87s)
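[Editor's note] The docker build transcript above implies a three-step context: FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /. The actual contents of testdata\build are not shown in the log, so the Go sketch below only reconstructs an equivalent context from those steps and runs a plain local docker build; the file names, file contents and temp directory are assumptions.

package main

import (
	"os"
	"os/exec"
	"path/filepath"
)

func main() {
	// Recreate a build context equivalent to the three steps shown in the
	// transcript above. The real testdata\build directory may differ.
	dir, err := os.MkdirTemp("", "build-context")
	if err != nil {
		panic(err)
	}
	defer os.RemoveAll(dir)

	dockerfile := "FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n"
	if err := os.WriteFile(filepath.Join(dir, "Dockerfile"), []byte(dockerfile), 0o644); err != nil {
		panic(err)
	}
	if err := os.WriteFile(filepath.Join(dir, "content.txt"), []byte("hello\n"), 0o644); err != nil {
		panic(err)
	}

	// Same shape as the command the test runs inside the VM:
	// docker build -t localhost/my-image:functional-235400 <context>
	cmd := exec.Command("docker", "build", "-t", "localhost/my-image:functional-235400", dir)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}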

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (4.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (4.1267875s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-235400
--- PASS: TestFunctional/parallel/ImageCommands/Setup (4.41s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (24.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-235400 image load --daemon gcr.io/google-containers/addon-resizer:functional-235400 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-windows-amd64.exe -p functional-235400 image load --daemon gcr.io/google-containers/addon-resizer:functional-235400 --alsologtostderr: (16.2980465s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-235400 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-235400 image ls: (8.2825502s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (24.58s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (2.98s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-235400 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Done: out/minikube-windows-amd64.exe -p functional-235400 update-context --alsologtostderr -v=2: (2.9818338s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (2.98s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (3.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-235400 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Done: out/minikube-windows-amd64.exe -p functional-235400 update-context --alsologtostderr -v=2: (3.0787498s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (3.08s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (2.96s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-235400 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Done: out/minikube-windows-amd64.exe -p functional-235400 update-context --alsologtostderr -v=2: (2.9577941s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (2.96s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (20.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-235400 image load --daemon gcr.io/google-containers/addon-resizer:functional-235400 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-windows-amd64.exe -p functional-235400 image load --daemon gcr.io/google-containers/addon-resizer:functional-235400 --alsologtostderr: (12.6761381s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-235400 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-235400 image ls: (8.0731639s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (20.75s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (29.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (5.1172086s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-235400
functional_test.go:244: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-235400 image load --daemon gcr.io/google-containers/addon-resizer:functional-235400 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-windows-amd64.exe -p functional-235400 image load --daemon gcr.io/google-containers/addon-resizer:functional-235400 --alsologtostderr: (16.1936918s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-235400 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-235400 image ls: (8.0134632s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (29.59s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (10.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-235400 image save gcr.io/google-containers/addon-resizer:functional-235400 C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-windows-amd64.exe -p functional-235400 image save gcr.io/google-containers/addon-resizer:functional-235400 C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr: (10.1203748s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (10.12s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (16.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-235400 image rm gcr.io/google-containers/addon-resizer:functional-235400 --alsologtostderr
functional_test.go:391: (dbg) Done: out/minikube-windows-amd64.exe -p functional-235400 image rm gcr.io/google-containers/addon-resizer:functional-235400 --alsologtostderr: (8.3346974s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-235400 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-235400 image ls: (7.9594476s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (16.30s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (18.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-235400 image load C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-windows-amd64.exe -p functional-235400 image load C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr: (10.6301138s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-235400 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-235400 image ls: (7.9201081s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (18.55s)
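[Editor's note] ImageSaveToFile and ImageLoadFromFile together form a tar round trip through the minikube CLI. The sketch below replays that round trip with the same commands, image tag and tar path logged by the two tests above; it is an illustration, not the tests' own code.

package main

import (
	"os"
	"os/exec"
)

// run executes one minikube CLI call against the functional-235400 profile,
// mirroring the commands logged by the two tests above.
func run(args ...string) {
	cmd := exec.Command("out/minikube-windows-amd64.exe",
		append([]string{"-p", "functional-235400"}, args...)...)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}

func main() {
	tarball := `C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar`
	image := "gcr.io/google-containers/addon-resizer:functional-235400"

	run("image", "save", image, tarball) // export the image to a tar file
	run("image", "load", tarball)        // load it back into the cluster runtime
	run("image", "ls")                   // confirm the tag is listed again
}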

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (10s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-235400
functional_test.go:423: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-235400 image save --daemon gcr.io/google-containers/addon-resizer:functional-235400 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-windows-amd64.exe -p functional-235400 image save --daemon gcr.io/google-containers/addon-resizer:functional-235400 --alsologtostderr: (9.5695305s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-235400
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (10.00s)

                                                
                                    
x
+
TestFunctional/delete_addon-resizer_images (0.53s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-235400
--- PASS: TestFunctional/delete_addon-resizer_images (0.53s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.18s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-235400
--- PASS: TestFunctional/delete_my-image_image (0.18s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.2s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-235400
--- PASS: TestFunctional/delete_minikube_cached_images (0.20s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (768.27s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p ha-609500 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperv
E0604 22:11:45.641328   14064 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-235400\client.crt: The system cannot find the path specified.
E0604 22:11:45.656655   14064 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-235400\client.crt: The system cannot find the path specified.
E0604 22:11:45.684594   14064 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-235400\client.crt: The system cannot find the path specified.
E0604 22:11:45.713948   14064 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-235400\client.crt: The system cannot find the path specified.
E0604 22:11:45.756890   14064 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-235400\client.crt: The system cannot find the path specified.
E0604 22:11:45.848889   14064 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-235400\client.crt: The system cannot find the path specified.
E0604 22:11:46.013035   14064 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-235400\client.crt: The system cannot find the path specified.
E0604 22:11:46.338904   14064 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-235400\client.crt: The system cannot find the path specified.
E0604 22:11:46.991906   14064 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-235400\client.crt: The system cannot find the path specified.
E0604 22:11:48.281286   14064 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-235400\client.crt: The system cannot find the path specified.
E0604 22:11:50.841690   14064 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-235400\client.crt: The system cannot find the path specified.
E0604 22:11:55.974468   14064 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-235400\client.crt: The system cannot find the path specified.
E0604 22:12:06.216229   14064 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-235400\client.crt: The system cannot find the path specified.
E0604 22:12:26.698924   14064 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-235400\client.crt: The system cannot find the path specified.
E0604 22:13:07.660950   14064 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-235400\client.crt: The system cannot find the path specified.
E0604 22:13:16.980546   14064 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-369400\client.crt: The system cannot find the path specified.
E0604 22:14:29.582317   14064 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-235400\client.crt: The system cannot find the path specified.
E0604 22:16:45.647155   14064 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-235400\client.crt: The system cannot find the path specified.
E0604 22:17:13.438489   14064 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-235400\client.crt: The system cannot find the path specified.
E0604 22:18:16.990770   14064 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-369400\client.crt: The system cannot find the path specified.
E0604 22:21:20.202580   14064 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-369400\client.crt: The system cannot find the path specified.
ha_test.go:101: (dbg) Done: out/minikube-windows-amd64.exe start -p ha-609500 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperv: (12m8.2125763s)
ha_test.go:107: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-609500 status -v=7 --alsologtostderr
E0604 22:21:45.643439   14064 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-235400\client.crt: The system cannot find the path specified.
ha_test.go:107: (dbg) Done: out/minikube-windows-amd64.exe -p ha-609500 status -v=7 --alsologtostderr: (40.0504997s)
--- PASS: TestMultiControlPlane/serial/StartCluster (768.27s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (13.37s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-609500 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-609500 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-609500 -- rollout status deployment/busybox: (3.891268s)
ha_test.go:140: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-609500 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-609500 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-609500 -- exec busybox-fc5497c4f-gbl9h -- nslookup kubernetes.io
ha_test.go:171: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-609500 -- exec busybox-fc5497c4f-gbl9h -- nslookup kubernetes.io: (2.0214404s)
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-609500 -- exec busybox-fc5497c4f-m2dsk -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-609500 -- exec busybox-fc5497c4f-qm589 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-609500 -- exec busybox-fc5497c4f-qm589 -- nslookup kubernetes.io: (1.6267996s)
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-609500 -- exec busybox-fc5497c4f-gbl9h -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-609500 -- exec busybox-fc5497c4f-m2dsk -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-609500 -- exec busybox-fc5497c4f-qm589 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-609500 -- exec busybox-fc5497c4f-gbl9h -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-609500 -- exec busybox-fc5497c4f-m2dsk -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-609500 -- exec busybox-fc5497c4f-qm589 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (13.37s)
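[Editor's note] DeployApp resolves kubernetes.io, kubernetes.default and kubernetes.default.svc.cluster.local from each busybox replica. A minimal Go sketch of that loop follows, shelling out to kubectl the way the ha_test.go lines above do; the pod names are copied from this particular run and would normally come from the jsonpath query shown.

package main

import (
	"os"
	"os/exec"
)

func main() {
	// In the test, pod names come from:
	//   kubectl get pods -o jsonpath='{.items[*].metadata.name}'
	// They are hard-coded here purely for illustration.
	pods := []string{"busybox-fc5497c4f-gbl9h", "busybox-fc5497c4f-m2dsk", "busybox-fc5497c4f-qm589"}
	hosts := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}

	for _, pod := range pods {
		for _, host := range hosts {
			cmd := exec.Command("kubectl", "--context", "ha-609500",
				"exec", pod, "--", "nslookup", host)
			cmd.Stdout = os.Stdout
			cmd.Stderr = os.Stderr
			if err := cmd.Run(); err != nil {
				panic(err)
			}
		}
	}
}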

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (281.12s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe node add -p ha-609500 -v=7 --alsologtostderr
E0604 22:26:45.643286   14064 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-235400\client.crt: The system cannot find the path specified.
ha_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe node add -p ha-609500 -v=7 --alsologtostderr: (3m47.0006604s)
ha_test.go:234: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-609500 status -v=7 --alsologtostderr
E0604 22:28:08.811753   14064 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-235400\client.crt: The system cannot find the path specified.
ha_test.go:234: (dbg) Done: out/minikube-windows-amd64.exe -p ha-609500 status -v=7 --alsologtostderr: (54.1221205s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (281.12s)

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.31s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-609500 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.31s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (32.47s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
E0604 22:28:16.997022   14064 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-369400\client.crt: The system cannot find the path specified.
ha_test.go:281: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (32.4713475s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (32.47s)

                                                
                                    
x
+
TestImageBuild/serial/Setup (210.6s)

                                                
                                                
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-windows-amd64.exe start -p image-917700 --driver=hyperv
E0604 22:43:17.000758   14064 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-369400\client.crt: The system cannot find the path specified.
E0604 22:44:48.827781   14064 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-235400\client.crt: The system cannot find the path specified.
image_test.go:69: (dbg) Done: out/minikube-windows-amd64.exe start -p image-917700 --driver=hyperv: (3m30.5962881s)
--- PASS: TestImageBuild/serial/Setup (210.60s)

                                                
                                    
x
+
TestImageBuild/serial/NormalBuild (10.28s)

                                                
                                                
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-917700
image_test.go:78: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-917700: (10.2704127s)
--- PASS: TestImageBuild/serial/NormalBuild (10.28s)

                                                
                                    
x
+
TestImageBuild/serial/BuildWithBuildArg (9.73s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-917700
image_test.go:99: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-917700: (9.7330605s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (9.73s)

                                                
                                    
x
+
TestImageBuild/serial/BuildWithDockerIgnore (8.43s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-917700
image_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-917700: (8.4228738s)
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (8.43s)

                                                
                                    
x
+
TestImageBuild/serial/BuildWithSpecifiedDockerfile (8.19s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-917700
image_test.go:88: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-917700: (8.1778433s)
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (8.19s)

                                                
                                    
x
+
TestJSONOutput/start/Command (227.79s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-257300 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperv
E0604 22:48:17.008088   14064 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-369400\client.crt: The system cannot find the path specified.
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe start -p json-output-257300 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperv: (3m47.7865575s)
--- PASS: TestJSONOutput/start/Command (227.79s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (8.71s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe pause -p json-output-257300 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe pause -p json-output-257300 --output=json --user=testUser: (8.7058891s)
--- PASS: TestJSONOutput/pause/Command (8.71s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (8.74s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p json-output-257300 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe unpause -p json-output-257300 --output=json --user=testUser: (8.7369456s)
--- PASS: TestJSONOutput/unpause/Command (8.74s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (42.43s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe stop -p json-output-257300 --output=json --user=testUser
E0604 22:51:45.659195   14064 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-235400\client.crt: The system cannot find the path specified.
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe stop -p json-output-257300 --output=json --user=testUser: (42.4247393s)
--- PASS: TestJSONOutput/stop/Command (42.43s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (1.6s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-error-904400 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p json-output-error-904400 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (307.4329ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"750b17ce-8af6-4a66-9b85-7fa1ef93ce18","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-904400] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4474 Build 19045.4474","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"440937c4-7374-42f1-a632-b2eeb2a87365","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube6\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"388dffb6-fda0-4737-9ab9-0efdf5d7446a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"15e91dc0-e9e8-4428-b3b2-2711669a39f1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"bbae52c7-cf53-4914-95b6-c28556cbb069","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19024"}}
	{"specversion":"1.0","id":"37b7a0f8-48b5-495c-9ca1-bf7e7e3b877b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"a72e6caa-4f04-4812-944b-ac23c34d1249","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on windows/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
** stderr ** 
	W0604 22:52:24.620181    2568 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "json-output-error-904400" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p json-output-error-904400
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p json-output-error-904400: (1.2892497s)
--- PASS: TestErrorJSONOutput (1.60s)
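
The -- stdout -- block above is what `--output=json` emits: one CloudEvents-style JSON object per line, with types such as io.k8s.sigs.minikube.step, .info and .error and string-valued "data" fields. Below is a minimal Go sketch for consuming such a stream, assuming the captured stdout is piped in on standard input; the struct fields simply mirror the events shown above and nothing beyond them.

package main

import (
	"bufio"
	"bytes"
	"encoding/json"
	"fmt"
	"log"
	"os"
)

// minikubeEvent mirrors the JSON objects in the -- stdout -- block above;
// every value under "data" is a string there, so a string map is enough.
type minikubeEvent struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	// Pipe the captured stdout of a `--output=json` run in on standard input.
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		line := bytes.TrimSpace(sc.Bytes())
		if len(line) == 0 {
			continue
		}
		var ev minikubeEvent
		if err := json.Unmarshal(line, &ev); err != nil {
			log.Fatalf("bad event line: %v", err)
		}
		// An io.k8s.sigs.minikube.error event also carries "exitcode" and
		// "name" (DRV_UNSUPPORTED_OS above) under the same "data" map.
		fmt.Printf("%-40s %s\n", ev.Type, ev.Data["message"])
	}
	if err := sc.Err(); err != nil {
		log.Fatal(err)
	}
}

Fed the stdout block above, this would print the step/info messages followed by the DRV_UNSUPPORTED_OS error message.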

                                                
                                    
x
+
TestMainNoArgs (0.29s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-windows-amd64.exe
--- PASS: TestMainNoArgs (0.29s)

                                                
                                    
x
+
TestMinikubeProfile (562.67s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p first-203900 --driver=hyperv
E0604 22:53:16.999335   14064 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-369400\client.crt: The system cannot find the path specified.
E0604 22:54:40.245429   14064 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-369400\client.crt: The system cannot find the path specified.
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p first-203900 --driver=hyperv: (3m34.6073107s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p second-203900 --driver=hyperv
E0604 22:56:45.656675   14064 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-235400\client.crt: The system cannot find the path specified.
E0604 22:58:17.010899   14064 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-369400\client.crt: The system cannot find the path specified.
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p second-203900 --driver=hyperv: (3m33.6588051s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile first-203900
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (23.0607172s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile second-203900
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (22.7525403s)
helpers_test.go:175: Cleaning up "second-203900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p second-203900
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p second-203900: (46.9196354s)
helpers_test.go:175: Cleaning up "first-203900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p first-203900
E0604 23:01:28.839969   14064 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-235400\client.crt: The system cannot find the path specified.
E0604 23:01:45.660797   14064 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-235400\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p first-203900: (40.6967534s)
--- PASS: TestMinikubeProfile (562.67s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (167.91s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-1-821000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperv
E0604 23:03:17.016749   14064 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-369400\client.crt: The system cannot find the path specified.
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-1-821000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperv: (2m46.8974309s)
--- PASS: TestMountStart/serial/StartWithMountFirst (167.91s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (10.44s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-1-821000 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-1-821000 ssh -- ls /minikube-host: (10.4417121s)
--- PASS: TestMountStart/serial/VerifyMountFirst (10.44s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (172.28s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-821000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperv
E0604 23:06:45.676550   14064 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-235400\client.crt: The system cannot find the path specified.
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-821000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperv: (2m51.2788965s)
--- PASS: TestMountStart/serial/StartWithMountSecond (172.28s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (10.55s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-821000 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-821000 ssh -- ls /minikube-host: (10.549094s)
--- PASS: TestMountStart/serial/VerifyMountSecond (10.55s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (30.39s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe delete -p mount-start-1-821000 --alsologtostderr -v=5
E0604 23:08:17.013255   14064 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-369400\client.crt: The system cannot find the path specified.
pause_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe delete -p mount-start-1-821000 --alsologtostderr -v=5: (30.39358s)
--- PASS: TestMountStart/serial/DeleteFirst (30.39s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (10.23s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-821000 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-821000 ssh -- ls /minikube-host: (10.2234953s)
--- PASS: TestMountStart/serial/VerifyMountPostDelete (10.23s)

                                                
                                    
x
+
TestMountStart/serial/Stop (28.69s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe stop -p mount-start-2-821000
mount_start_test.go:155: (dbg) Done: out/minikube-windows-amd64.exe stop -p mount-start-2-821000: (28.6779494s)
--- PASS: TestMountStart/serial/Stop (28.69s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (126.78s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-821000
mount_start_test.go:166: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-821000: (2m5.7766757s)
--- PASS: TestMountStart/serial/RestartStopped (126.78s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (10.1s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-821000 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-821000 ssh -- ls /minikube-host: (10.1031542s)
--- PASS: TestMountStart/serial/VerifyMountPostStop (10.10s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (453.77s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-022000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperv
E0604 23:13:17.012776   14064 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-369400\client.crt: The system cannot find the path specified.
E0604 23:16:45.665918   14064 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-235400\client.crt: The system cannot find the path specified.
E0604 23:18:08.862313   14064 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-235400\client.crt: The system cannot find the path specified.
E0604 23:18:17.010943   14064 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-369400\client.crt: The system cannot find the path specified.
multinode_test.go:96: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-022000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperv: (7m7.3391652s)
multinode_test.go:102: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-022000 status --alsologtostderr
multinode_test.go:102: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-022000 status --alsologtostderr: (26.4302734s)
--- PASS: TestMultiNode/serial/FreshStart2Nodes (453.77s)

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (9.73s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-022000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-022000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-022000 -- rollout status deployment/busybox: (3.5001316s)
multinode_test.go:505: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-022000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-022000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-022000 -- exec busybox-fc5497c4f-8bcjx -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-022000 -- exec busybox-fc5497c4f-8bcjx -- nslookup kubernetes.io: (1.7044953s)
multinode_test.go:536: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-022000 -- exec busybox-fc5497c4f-cbgjv -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-022000 -- exec busybox-fc5497c4f-8bcjx -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-022000 -- exec busybox-fc5497c4f-cbgjv -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-022000 -- exec busybox-fc5497c4f-8bcjx -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-022000 -- exec busybox-fc5497c4f-cbgjv -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (9.73s)

                                                
                                    
x
+
TestMultiNode/serial/AddNode (252.13s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-022000 -v 3 --alsologtostderr
E0604 23:21:45.682292   14064 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-235400\client.crt: The system cannot find the path specified.
E0604 23:23:17.022455   14064 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-369400\client.crt: The system cannot find the path specified.
multinode_test.go:121: (dbg) Done: out/minikube-windows-amd64.exe node add -p multinode-022000 -v 3 --alsologtostderr: (3m33.3015562s)
multinode_test.go:127: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-022000 status --alsologtostderr
multinode_test.go:127: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-022000 status --alsologtostderr: (38.8062898s)
--- PASS: TestMultiNode/serial/AddNode (252.13s)

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.19s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-022000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.19s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (12.92s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
multinode_test.go:143: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (12.913869s)
--- PASS: TestMultiNode/serial/ProfileList (12.92s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (388.91s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-022000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-022000 status --output json --alsologtostderr: (38.7397808s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-022000 cp testdata\cp-test.txt multinode-022000:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-022000 cp testdata\cp-test.txt multinode-022000:/home/docker/cp-test.txt: (10.2259282s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-022000 ssh -n multinode-022000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-022000 ssh -n multinode-022000 "sudo cat /home/docker/cp-test.txt": (10.2224567s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-022000 cp multinode-022000:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile2973253033\001\cp-test_multinode-022000.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-022000 cp multinode-022000:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile2973253033\001\cp-test_multinode-022000.txt: (10.1046741s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-022000 ssh -n multinode-022000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-022000 ssh -n multinode-022000 "sudo cat /home/docker/cp-test.txt": (10.1301167s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-022000 cp multinode-022000:/home/docker/cp-test.txt multinode-022000-m02:/home/docker/cp-test_multinode-022000_multinode-022000-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-022000 cp multinode-022000:/home/docker/cp-test.txt multinode-022000-m02:/home/docker/cp-test_multinode-022000_multinode-022000-m02.txt: (17.927436s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-022000 ssh -n multinode-022000 "sudo cat /home/docker/cp-test.txt"
E0604 23:26:45.684681   14064 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-235400\client.crt: The system cannot find the path specified.
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-022000 ssh -n multinode-022000 "sudo cat /home/docker/cp-test.txt": (10.0901601s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-022000 ssh -n multinode-022000-m02 "sudo cat /home/docker/cp-test_multinode-022000_multinode-022000-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-022000 ssh -n multinode-022000-m02 "sudo cat /home/docker/cp-test_multinode-022000_multinode-022000-m02.txt": (10.1696467s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-022000 cp multinode-022000:/home/docker/cp-test.txt multinode-022000-m03:/home/docker/cp-test_multinode-022000_multinode-022000-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-022000 cp multinode-022000:/home/docker/cp-test.txt multinode-022000-m03:/home/docker/cp-test_multinode-022000_multinode-022000-m03.txt: (17.5310889s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-022000 ssh -n multinode-022000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-022000 ssh -n multinode-022000 "sudo cat /home/docker/cp-test.txt": (10.2419093s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-022000 ssh -n multinode-022000-m03 "sudo cat /home/docker/cp-test_multinode-022000_multinode-022000-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-022000 ssh -n multinode-022000-m03 "sudo cat /home/docker/cp-test_multinode-022000_multinode-022000-m03.txt": (10.3525345s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-022000 cp testdata\cp-test.txt multinode-022000-m02:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-022000 cp testdata\cp-test.txt multinode-022000-m02:/home/docker/cp-test.txt: (10.2151442s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-022000 ssh -n multinode-022000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-022000 ssh -n multinode-022000-m02 "sudo cat /home/docker/cp-test.txt": (10.2955769s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-022000 cp multinode-022000-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile2973253033\001\cp-test_multinode-022000-m02.txt
E0604 23:28:00.284513   14064 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-369400\client.crt: The system cannot find the path specified.
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-022000 cp multinode-022000-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile2973253033\001\cp-test_multinode-022000-m02.txt: (10.2096753s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-022000 ssh -n multinode-022000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-022000 ssh -n multinode-022000-m02 "sudo cat /home/docker/cp-test.txt": (10.2561318s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-022000 cp multinode-022000-m02:/home/docker/cp-test.txt multinode-022000:/home/docker/cp-test_multinode-022000-m02_multinode-022000.txt
E0604 23:28:17.028628   14064 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-369400\client.crt: The system cannot find the path specified.
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-022000 cp multinode-022000-m02:/home/docker/cp-test.txt multinode-022000:/home/docker/cp-test_multinode-022000-m02_multinode-022000.txt: (17.8666837s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-022000 ssh -n multinode-022000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-022000 ssh -n multinode-022000-m02 "sudo cat /home/docker/cp-test.txt": (10.0715937s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-022000 ssh -n multinode-022000 "sudo cat /home/docker/cp-test_multinode-022000-m02_multinode-022000.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-022000 ssh -n multinode-022000 "sudo cat /home/docker/cp-test_multinode-022000-m02_multinode-022000.txt": (10.0874479s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-022000 cp multinode-022000-m02:/home/docker/cp-test.txt multinode-022000-m03:/home/docker/cp-test_multinode-022000-m02_multinode-022000-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-022000 cp multinode-022000-m02:/home/docker/cp-test.txt multinode-022000-m03:/home/docker/cp-test_multinode-022000-m02_multinode-022000-m03.txt: (17.6123053s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-022000 ssh -n multinode-022000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-022000 ssh -n multinode-022000-m02 "sudo cat /home/docker/cp-test.txt": (10.0994847s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-022000 ssh -n multinode-022000-m03 "sudo cat /home/docker/cp-test_multinode-022000-m02_multinode-022000-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-022000 ssh -n multinode-022000-m03 "sudo cat /home/docker/cp-test_multinode-022000-m02_multinode-022000-m03.txt": (10.2563293s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-022000 cp testdata\cp-test.txt multinode-022000-m03:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-022000 cp testdata\cp-test.txt multinode-022000-m03:/home/docker/cp-test.txt: (10.0514347s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-022000 ssh -n multinode-022000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-022000 ssh -n multinode-022000-m03 "sudo cat /home/docker/cp-test.txt": (10.2471297s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-022000 cp multinode-022000-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile2973253033\001\cp-test_multinode-022000-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-022000 cp multinode-022000-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile2973253033\001\cp-test_multinode-022000-m03.txt: (10.1981114s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-022000 ssh -n multinode-022000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-022000 ssh -n multinode-022000-m03 "sudo cat /home/docker/cp-test.txt": (10.0643471s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-022000 cp multinode-022000-m03:/home/docker/cp-test.txt multinode-022000:/home/docker/cp-test_multinode-022000-m03_multinode-022000.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-022000 cp multinode-022000-m03:/home/docker/cp-test.txt multinode-022000:/home/docker/cp-test_multinode-022000-m03_multinode-022000.txt: (17.6806643s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-022000 ssh -n multinode-022000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-022000 ssh -n multinode-022000-m03 "sudo cat /home/docker/cp-test.txt": (10.0723912s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-022000 ssh -n multinode-022000 "sudo cat /home/docker/cp-test_multinode-022000-m03_multinode-022000.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-022000 ssh -n multinode-022000 "sudo cat /home/docker/cp-test_multinode-022000-m03_multinode-022000.txt": (9.9795373s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-022000 cp multinode-022000-m03:/home/docker/cp-test.txt multinode-022000-m02:/home/docker/cp-test_multinode-022000-m03_multinode-022000-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-022000 cp multinode-022000-m03:/home/docker/cp-test.txt multinode-022000-m02:/home/docker/cp-test_multinode-022000-m03_multinode-022000-m02.txt: (17.5152222s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-022000 ssh -n multinode-022000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-022000 ssh -n multinode-022000-m03 "sudo cat /home/docker/cp-test.txt": (10.0768764s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-022000 ssh -n multinode-022000-m02 "sudo cat /home/docker/cp-test_multinode-022000-m03_multinode-022000-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-022000 ssh -n multinode-022000-m02 "sudo cat /home/docker/cp-test_multinode-022000-m03_multinode-022000-m02.txt": (10.1453464s)
--- PASS: TestMultiNode/serial/CopyFile (388.91s)
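
Each hop in the CopyFile log above follows the same copy-then-verify pattern: `minikube -p <profile> cp <src> <node>:<path>` followed by `minikube -p <profile> ssh -n <node> "sudo cat <path>"`. A minimal Go sketch of one such hop follows; the binary path, profile and node names are copied from the log and are environment-specific assumptions, not fixed values.

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Values below are copied from the log above; adjust for another setup.
	bin := `out/minikube-windows-amd64.exe`
	profile := "multinode-022000"
	node := "multinode-022000-m02"

	// Copy a local file onto the target node...
	if out, err := exec.Command(bin, "-p", profile, "cp",
		`testdata\cp-test.txt`, node+":/home/docker/cp-test.txt").CombinedOutput(); err != nil {
		log.Fatalf("cp failed: %v\n%s", err, out)
	}

	// ...then read it back over ssh to confirm it arrived.
	out, err := exec.Command(bin, "-p", profile, "ssh", "-n", node,
		"sudo cat /home/docker/cp-test.txt").CombinedOutput()
	if err != nil {
		log.Fatalf("ssh failed: %v\n%s", err, out)
	}
	fmt.Printf("node copy now contains: %s", out)
}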

                                                
                                    
x
+
TestMultiNode/serial/StopNode (81.5s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-022000 node stop m03
E0604 23:31:45.677414   14064 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-235400\client.crt: The system cannot find the path specified.
multinode_test.go:248: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-022000 node stop m03: (25.6251886s)
multinode_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-022000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-022000 status: exit status 7 (27.7252289s)

                                                
                                                
-- stdout --
	multinode-022000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-022000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-022000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0604 23:31:54.779088   11164 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
multinode_test.go:261: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-022000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-022000 status --alsologtostderr: exit status 7 (28.1424075s)

                                                
                                                
-- stdout --
	multinode-022000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-022000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-022000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0604 23:32:22.512027    2768 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0604 23:32:22.602412    2768 out.go:291] Setting OutFile to fd 1196 ...
	I0604 23:32:22.602412    2768 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0604 23:32:22.603014    2768 out.go:304] Setting ErrFile to fd 1304...
	I0604 23:32:22.603014    2768 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0604 23:32:22.618674    2768 out.go:298] Setting JSON to false
	I0604 23:32:22.618674    2768 mustload.go:65] Loading cluster: multinode-022000
	I0604 23:32:22.618874    2768 notify.go:220] Checking for updates...
	I0604 23:32:22.619637    2768 config.go:182] Loaded profile config "multinode-022000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0604 23:32:22.619740    2768 status.go:255] checking status of multinode-022000 ...
	I0604 23:32:22.621088    2768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000 ).state
	I0604 23:32:24.949041    2768 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:32:24.963709    2768 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:32:24.963709    2768 status.go:330] multinode-022000 host status = "Running" (err=<nil>)
	I0604 23:32:24.963894    2768 host.go:66] Checking if "multinode-022000" exists ...
	I0604 23:32:24.964747    2768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000 ).state
	I0604 23:32:27.365951    2768 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:32:27.365951    2768 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:32:27.380942    2768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000 ).networkadapters[0]).ipaddresses[0]
	I0604 23:32:30.304636    2768 main.go:141] libmachine: [stdout =====>] : 172.20.128.97
	
	I0604 23:32:30.304636    2768 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:32:30.304636    2768 host.go:66] Checking if "multinode-022000" exists ...
	I0604 23:32:30.321034    2768 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0604 23:32:30.321034    2768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000 ).state
	I0604 23:32:32.711729    2768 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:32:32.711799    2768 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:32:32.711799    2768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000 ).networkadapters[0]).ipaddresses[0]
	I0604 23:32:35.437622    2768 main.go:141] libmachine: [stdout =====>] : 172.20.128.97
	
	I0604 23:32:35.437622    2768 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:32:35.439349    2768 sshutil.go:53] new ssh client: &{IP:172.20.128.97 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-022000\id_rsa Username:docker}
	I0604 23:32:35.542989    2768 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (5.2219164s)
	I0604 23:32:35.559083    2768 ssh_runner.go:195] Run: systemctl --version
	I0604 23:32:35.587517    2768 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0604 23:32:35.617965    2768 kubeconfig.go:125] found "multinode-022000" server: "https://172.20.128.97:8443"
	I0604 23:32:35.617965    2768 api_server.go:166] Checking apiserver status ...
	I0604 23:32:35.629564    2768 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0604 23:32:35.673105    2768 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2011/cgroup
	W0604 23:32:35.696154    2768 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2011/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0604 23:32:35.713823    2768 ssh_runner.go:195] Run: ls
	I0604 23:32:35.722092    2768 api_server.go:253] Checking apiserver healthz at https://172.20.128.97:8443/healthz ...
	I0604 23:32:35.732415    2768 api_server.go:279] https://172.20.128.97:8443/healthz returned 200:
	ok
	I0604 23:32:35.732415    2768 status.go:422] multinode-022000 apiserver status = Running (err=<nil>)
	I0604 23:32:35.732415    2768 status.go:257] multinode-022000 status: &{Name:multinode-022000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0604 23:32:35.732415    2768 status.go:255] checking status of multinode-022000-m02 ...
	I0604 23:32:35.734634    2768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000-m02 ).state
	I0604 23:32:38.027402    2768 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:32:38.027402    2768 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:32:38.040125    2768 status.go:330] multinode-022000-m02 host status = "Running" (err=<nil>)
	I0604 23:32:38.040125    2768 host.go:66] Checking if "multinode-022000-m02" exists ...
	I0604 23:32:38.040926    2768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000-m02 ).state
	I0604 23:32:40.338811    2768 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:32:40.351420    2768 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:32:40.351535    2768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000-m02 ).networkadapters[0]).ipaddresses[0]
	I0604 23:32:43.068082    2768 main.go:141] libmachine: [stdout =====>] : 172.20.130.221
	
	I0604 23:32:43.068082    2768 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:32:43.068082    2768 host.go:66] Checking if "multinode-022000-m02" exists ...
	I0604 23:32:43.084910    2768 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0604 23:32:43.084910    2768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000-m02 ).state
	I0604 23:32:45.351223    2768 main.go:141] libmachine: [stdout =====>] : Running
	
	I0604 23:32:45.351223    2768 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:32:45.363910    2768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-022000-m02 ).networkadapters[0]).ipaddresses[0]
	I0604 23:32:48.106975    2768 main.go:141] libmachine: [stdout =====>] : 172.20.130.221
	
	I0604 23:32:48.106975    2768 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:32:48.119992    2768 sshutil.go:53] new ssh client: &{IP:172.20.130.221 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-022000-m02\id_rsa Username:docker}
	I0604 23:32:48.216409    2768 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (5.1313902s)
	I0604 23:32:48.231385    2768 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0604 23:32:48.259954    2768 status.go:257] multinode-022000-m02 status: &{Name:multinode-022000-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0604 23:32:48.260070    2768 status.go:255] checking status of multinode-022000-m03 ...
	I0604 23:32:48.261697    2768 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-022000-m03 ).state
	I0604 23:32:50.489446    2768 main.go:141] libmachine: [stdout =====>] : Off
	
	I0604 23:32:50.489446    2768 main.go:141] libmachine: [stderr =====>] : 
	I0604 23:32:50.489446    2768 status.go:330] multinode-022000-m03 host status = "Stopped" (err=<nil>)
	I0604 23:32:50.489446    2768 status.go:343] host is not running, skipping remaining checks
	I0604 23:32:50.502189    2768 status.go:257] multinode-022000-m03 status: &{Name:multinode-022000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (81.50s)
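
The --alsologtostderr trace above shows how the status check decides host state on Hyper-V: libmachine shells out to powershell.exe for `( Hyper-V\Get-VM <name> ).state` and, for running hosts, the first adapter's IP address. The Go sketch below issues the same two queries for one VM name taken from the trace; it is an illustration of the logged commands, not minikube's own implementation, and it assumes a Windows host with the Hyper-V module available.

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

// psQuery runs one PowerShell expression the way the trace above shows
// libmachine doing it (-NoProfile -NonInteractive, full powershell.exe path).
func psQuery(query string) (string, error) {
	out, err := exec.Command(
		`C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
		"-NoProfile", "-NonInteractive", query).CombinedOutput()
	return strings.TrimSpace(string(out)), err
}

func main() {
	vm := "multinode-022000-m03" // VM name taken from the trace above

	state, err := psQuery(fmt.Sprintf("( Hyper-V\\Get-VM %s ).state", vm))
	if err != nil {
		log.Fatalf("state query failed: %v", err)
	}
	fmt.Println("state:", state) // "Running" or "Off", as in the trace

	if state == "Running" {
		ip, err := psQuery(fmt.Sprintf(
			"(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", vm))
		if err != nil {
			log.Fatalf("ip query failed: %v", err)
		}
		fmt.Println("ip:", ip)
	}
}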

                                                
                                    
x
+
TestPreload (557.16s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-842700 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperv --kubernetes-version=v1.24.4
E0604 23:44:40.306439   14064 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-369400\client.crt: The system cannot find the path specified.
E0604 23:46:45.683737   14064 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-235400\client.crt: The system cannot find the path specified.
preload_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-842700 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperv --kubernetes-version=v1.24.4: (4m54.2744949s)
preload_test.go:52: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-842700 image pull gcr.io/k8s-minikube/busybox
E0604 23:48:17.026828   14064 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-369400\client.crt: The system cannot find the path specified.
preload_test.go:52: (dbg) Done: out/minikube-windows-amd64.exe -p test-preload-842700 image pull gcr.io/k8s-minikube/busybox: (9.0558813s)
preload_test.go:58: (dbg) Run:  out/minikube-windows-amd64.exe stop -p test-preload-842700
preload_test.go:58: (dbg) Done: out/minikube-windows-amd64.exe stop -p test-preload-842700: (41.3121758s)
preload_test.go:66: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-842700 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperv
E0604 23:51:28.892669   14064 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-235400\client.crt: The system cannot find the path specified.
E0604 23:51:45.691587   14064 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-235400\client.crt: The system cannot find the path specified.
preload_test.go:66: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-842700 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperv: (2m40.4117925s)
preload_test.go:71: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-842700 image list
preload_test.go:71: (dbg) Done: out/minikube-windows-amd64.exe -p test-preload-842700 image list: (7.9075183s)
helpers_test.go:175: Cleaning up "test-preload-842700" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p test-preload-842700
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p test-preload-842700: (44.1683593s)
--- PASS: TestPreload (557.16s)

                                                
                                    
x
+
TestScheduledStopWindows (345.93s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe start -p scheduled-stop-435400 --memory=2048 --driver=hyperv
E0604 23:53:17.034411   14064 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-369400\client.crt: The system cannot find the path specified.
scheduled_stop_test.go:128: (dbg) Done: out/minikube-windows-amd64.exe start -p scheduled-stop-435400 --memory=2048 --driver=hyperv: (3m29.8788039s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-435400 --schedule 5m
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-435400 --schedule 5m: (11.4132955s)
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-435400 -n scheduled-stop-435400
scheduled_stop_test.go:191: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-435400 -n scheduled-stop-435400: exit status 1 (10.0052663s)

                                                
                                                
** stderr ** 
	W0604 23:56:19.892684    8056 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
scheduled_stop_test.go:191: status error: exit status 1 (may be ok)
scheduled_stop_test.go:54: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p scheduled-stop-435400 -- sudo systemctl show minikube-scheduled-stop --no-page
scheduled_stop_test.go:54: (dbg) Done: out/minikube-windows-amd64.exe ssh -p scheduled-stop-435400 -- sudo systemctl show minikube-scheduled-stop --no-page: (10.1215338s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-435400 --schedule 5s
E0604 23:56:45.697350   14064 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-235400\client.crt: The system cannot find the path specified.
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-435400 --schedule 5s: (11.3320348s)
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe status -p scheduled-stop-435400
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p scheduled-stop-435400: exit status 7 (2.59308s)

                                                
                                                
-- stdout --
	scheduled-stop-435400
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0604 23:57:51.375625    6760 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-435400 -n scheduled-stop-435400
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-435400 -n scheduled-stop-435400: exit status 7 (2.5703995s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
** stderr ** 
	W0604 23:57:53.992532   13508 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-435400" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p scheduled-stop-435400
E0604 23:58:17.029543   14064 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-369400\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p scheduled-stop-435400: (27.9769606s)
--- PASS: TestScheduledStopWindows (345.93s)
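
The scheduled-stop flow above boils down to `minikube stop -p <profile> --schedule <duration>` followed by polling `status --format={{.Host}}` until it reports Stopped; note that `status` exits non-zero once the host is down (exit status 7 in the log), so the exit code alone is not a failure signal. Below is a minimal Go sketch of that poll loop, reusing the binary path and profile name from the log as assumptions.

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
	"time"
)

func main() {
	// Binary path and profile name copied from the log above.
	bin := `out/minikube-windows-amd64.exe`
	profile := "scheduled-stop-435400"

	// Ask minikube to stop the profile shortly from now.
	if out, err := exec.Command(bin, "stop", "-p", profile, "--schedule", "5s").CombinedOutput(); err != nil {
		log.Fatalf("scheduling stop failed: %v\n%s", err, out)
	}

	// Poll the host state; the command's error is ignored because a stopped
	// host makes `status` exit non-zero, and only the printed state matters.
	for i := 0; i < 12; i++ {
		out, _ := exec.Command(bin, "status", "--format={{.Host}}", "-p", profile).CombinedOutput()
		state := strings.TrimSpace(string(out))
		fmt.Println("host:", state)
		if state == "Stopped" {
			return
		}
		time.Sleep(10 * time.Second)
	}
	log.Fatal("host did not reach Stopped in time")
}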

                                                
                                    
x
+
TestRunningBinaryUpgrade (1004.49s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube-v1.26.0.4106392682.exe start -p running-upgrade-693900 --memory=2200 --vm-driver=hyperv
E0605 00:03:17.038879   14064 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-369400\client.crt: The system cannot find the path specified.
version_upgrade_test.go:120: (dbg) Done: C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube-v1.26.0.4106392682.exe start -p running-upgrade-693900 --memory=2200 --vm-driver=hyperv: (6m25.4392091s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-windows-amd64.exe start -p running-upgrade-693900 --memory=2200 --alsologtostderr -v=1 --driver=hyperv
E0605 00:11:45.693407   14064 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-235400\client.crt: The system cannot find the path specified.
version_upgrade_test.go:130: (dbg) Done: out/minikube-windows-amd64.exe start -p running-upgrade-693900 --memory=2200 --alsologtostderr -v=1 --driver=hyperv: (9m2.5370866s)
helpers_test.go:175: Cleaning up "running-upgrade-693900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p running-upgrade-693900
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p running-upgrade-693900: (1m15.2817149s)
--- PASS: TestRunningBinaryUpgrade (1004.49s)

                                                
                                    
x
+
TestKubernetesUpgrade (1434.73s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-881400 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=hyperv
version_upgrade_test.go:222: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-881400 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=hyperv: (8m26.0460099s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-881400
version_upgrade_test.go:227: (dbg) Done: out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-881400: (42.4193301s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-windows-amd64.exe -p kubernetes-upgrade-881400 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p kubernetes-upgrade-881400 status --format={{.Host}}: exit status 7 (2.6355816s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
** stderr ** 
	W0605 00:12:33.999765    7516 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-881400 --memory=2200 --kubernetes-version=v1.30.1 --alsologtostderr -v=1 --driver=hyperv
E0605 00:13:17.045679   14064 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-369400\client.crt: The system cannot find the path specified.
version_upgrade_test.go:243: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-881400 --memory=2200 --kubernetes-version=v1.30.1 --alsologtostderr -v=1 --driver=hyperv: (7m23.4964204s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-881400 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-881400 --memory=2200 --kubernetes-version=v1.20.0 --driver=hyperv
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-881400 --memory=2200 --kubernetes-version=v1.20.0 --driver=hyperv: exit status 106 (318.8125ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-881400] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4474 Build 19045.4474
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19024
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0605 00:20:00.342989   12540 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.30.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-881400
	    minikube start -p kubernetes-upgrade-881400 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-8814002 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.30.1, by running:
	    
	    minikube start -p kubernetes-upgrade-881400 --kubernetes-version=v1.30.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-881400 --memory=2200 --kubernetes-version=v1.30.1 --alsologtostderr -v=1 --driver=hyperv
version_upgrade_test.go:275: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-881400 --memory=2200 --kubernetes-version=v1.30.1 --alsologtostderr -v=1 --driver=hyperv: (6m31.887151s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-881400" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-881400
E0605 00:26:45.708794   14064 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-235400\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-881400: (47.719796s)
--- PASS: TestKubernetesUpgrade (1434.73s)
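Note: in the run above, the attempted downgrade from v1.30.1 to v1.20.0 is rejected with exit status 106 (K8S_DOWNGRADE_UNSUPPORTED) and the suggested recovery commands are printed, rather than the test failing. The following is a minimal Go sketch of how that rejection could be reproduced and checked outside the test suite; it shells out with os/exec, the binary path and flags are copied from the log, and it is not the helper actually used by version_upgrade_test.go.

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        // Same invocation the log shows being rejected; the assertion
        // logic below is illustrative only.
        cmd := exec.Command("out/minikube-windows-amd64.exe", "start",
            "-p", "kubernetes-upgrade-881400",
            "--memory=2200", "--kubernetes-version=v1.20.0", "--driver=hyperv")
        out, err := cmd.CombinedOutput()
        var exitErr *exec.ExitError
        if errors.As(err, &exitErr) && exitErr.ExitCode() == 106 {
            fmt.Println("downgrade rejected as expected (exit status 106)")
            return
        }
        fmt.Printf("unexpected result: err=%v\n%s", err, out)
    }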

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.44s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-628100 --no-kubernetes --kubernetes-version=1.20 --driver=hyperv
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-628100 --no-kubernetes --kubernetes-version=1.20 --driver=hyperv: exit status 14 (430.8804ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-628100] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4474 Build 19045.4474
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19024
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0604 23:58:24.583113    7448 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.44s)
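Note: the recurring warning "Unable to resolve the current Docker CLI context \"default\"" seen in this and most other stderr blocks refers to a meta.json under a hash-named directory in .docker\contexts\meta. Assuming the Docker CLI derives that directory name from the SHA-256 of the context name (an assumption, not stated in this report), the digest can be recomputed with the short Go sketch below and compared against the path printed in the warning.

    package main

    import (
        "crypto/sha256"
        "fmt"
    )

    func main() {
        // Assumption: context metadata lives under
        // .docker\contexts\meta\<sha256(context name)>\meta.json.
        // Print the digest for "default" to compare with the warning's path.
        sum := sha256.Sum256([]byte("default"))
        fmt.Printf("%x\n", sum)
    }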

                                                
                                    
TestStoppedBinaryUpgrade/Setup (2.91s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.91s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (975.07s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube-v1.26.0.590170011.exe start -p stopped-upgrade-216900 --memory=2200 --vm-driver=hyperv
E0605 00:06:45.703548   14064 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-235400\client.crt: The system cannot find the path specified.
version_upgrade_test.go:183: (dbg) Done: C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube-v1.26.0.590170011.exe start -p stopped-upgrade-216900 --memory=2200 --vm-driver=hyperv: (8m47.5240259s)
version_upgrade_test.go:192: (dbg) Run:  C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube-v1.26.0.590170011.exe -p stopped-upgrade-216900 stop
version_upgrade_test.go:192: (dbg) Done: C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube-v1.26.0.590170011.exe -p stopped-upgrade-216900 stop: (38.6374224s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-windows-amd64.exe start -p stopped-upgrade-216900 --memory=2200 --alsologtostderr -v=1 --driver=hyperv
version_upgrade_test.go:198: (dbg) Done: out/minikube-windows-amd64.exe start -p stopped-upgrade-216900 --memory=2200 --alsologtostderr -v=1 --driver=hyperv: (6m48.8962741s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (975.07s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (10.16s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-windows-amd64.exe logs -p stopped-upgrade-216900
E0605 00:21:45.697818   14064 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-235400\client.crt: The system cannot find the path specified.
version_upgrade_test.go:206: (dbg) Done: out/minikube-windows-amd64.exe logs -p stopped-upgrade-216900: (10.1516401s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (10.16s)

                                                
                                    

Test skip (30/199)

TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.1/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.1/binaries (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false windows amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (300.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-235400 --alsologtostderr -v=1]
E0604 22:03:16.990545   14064 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-369400\client.crt: The system cannot find the path specified.
functional_test.go:912: output didn't produce a URL
functional_test.go:906: (dbg) stopping [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-235400 --alsologtostderr -v=1] ...
helpers_test.go:502: unable to terminate pid 13664: Access is denied.
--- SKIP: TestFunctional/parallel/DashboardCmd (300.02s)

                                                
                                    
TestFunctional/parallel/DryRun (5.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-235400 --dry-run --memory 250MB --alsologtostderr --driver=hyperv
functional_test.go:970: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-235400 --dry-run --memory 250MB --alsologtostderr --driver=hyperv: exit status 1 (5.0242228s)

                                                
                                                
-- stdout --
	* [functional-235400] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4474 Build 19045.4474
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19024
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true

                                                
                                                
-- /stdout --
** stderr ** 
	W0604 22:02:48.512038   13744 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0604 22:02:48.633805   13744 out.go:291] Setting OutFile to fd 764 ...
	I0604 22:02:48.635315   13744 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0604 22:02:48.635315   13744 out.go:304] Setting ErrFile to fd 744...
	I0604 22:02:48.635382   13744 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0604 22:02:48.666235   13744 out.go:298] Setting JSON to false
	I0604 22:02:48.669365   13744 start.go:129] hostinfo: {"hostname":"minikube6","uptime":85818,"bootTime":1717452750,"procs":190,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4474 Build 19045.4474","kernelVersion":"10.0.19045.4474 Build 19045.4474","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0604 22:02:48.669365   13744 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0604 22:02:48.674235   13744 out.go:177] * [functional-235400] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4474 Build 19045.4474
	I0604 22:02:48.676538   13744 notify.go:220] Checking for updates...
	I0604 22:02:48.681430   13744 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0604 22:02:48.684237   13744 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0604 22:02:48.687239   13744 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0604 22:02:48.690150   13744 out.go:177]   - MINIKUBE_LOCATION=19024
	I0604 22:02:48.693157   13744 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0604 22:02:48.697137   13744 config.go:182] Loaded profile config "functional-235400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0604 22:02:48.697806   13744 driver.go:392] Setting default libvirt URI to qemu:///system

                                                
                                                
** /stderr **
functional_test.go:976: skipping this error on HyperV till this issue is solved https://github.com/kubernetes/minikube/issues/9785
--- SKIP: TestFunctional/parallel/DryRun (5.02s)

                                                
                                    
TestFunctional/parallel/InternationalLanguage (5.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-235400 --dry-run --memory 250MB --alsologtostderr --driver=hyperv
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-235400 --dry-run --memory 250MB --alsologtostderr --driver=hyperv: exit status 1 (5.0429782s)

                                                
                                                
-- stdout --
	* [functional-235400] minikube v1.33.1 sur Microsoft Windows 10 Enterprise N 10.0.19045.4474 Build 19045.4474
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19024
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true

                                                
                                                
-- /stdout --
** stderr ** 
	W0604 22:02:54.486338    3612 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0604 22:02:54.570698    3612 out.go:291] Setting OutFile to fd 960 ...
	I0604 22:02:54.571792    3612 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0604 22:02:54.571792    3612 out.go:304] Setting ErrFile to fd 772...
	I0604 22:02:54.571792    3612 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0604 22:02:54.589975    3612 out.go:298] Setting JSON to false
	I0604 22:02:54.602661    3612 start.go:129] hostinfo: {"hostname":"minikube6","uptime":85824,"bootTime":1717452750,"procs":191,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4474 Build 19045.4474","kernelVersion":"10.0.19045.4474 Build 19045.4474","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0604 22:02:54.602847    3612 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0604 22:02:54.607834    3612 out.go:177] * [functional-235400] minikube v1.33.1 sur Microsoft Windows 10 Enterprise N 10.0.19045.4474 Build 19045.4474
	I0604 22:02:54.612617    3612 notify.go:220] Checking for updates...
	I0604 22:02:54.615720    3612 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0604 22:02:54.620596    3612 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0604 22:02:54.623154    3612 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0604 22:02:54.623562    3612 out.go:177]   - MINIKUBE_LOCATION=19024
	I0604 22:02:54.628390    3612 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0604 22:02:54.628917    3612 config.go:182] Loaded profile config "functional-235400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0604 22:02:54.632428    3612 driver.go:392] Setting default libvirt URI to qemu:///system

                                                
                                                
** /stderr **
functional_test.go:1021: skipping this error on HyperV till this issue is solved https://github.com/kubernetes/minikube/issues/9785
--- SKIP: TestFunctional/parallel/InternationalLanguage (5.05s)

                                                
                                    
TestFunctional/parallel/MountCmd (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd
=== PAUSE TestFunctional/parallel/MountCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MountCmd
functional_test_mount_test.go:57: skipping: mount broken on hyperv: https://github.com/kubernetes/minikube/issues/5029
--- SKIP: TestFunctional/parallel/MountCmd (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:230: The test WaitService/IngressIP is broken on hyperv https://github.com/kubernetes/minikube/issues/8381
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:258: skipping: access direct test is broken on windows: https://github.com/kubernetes/minikube/issues/8304
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

                                                
                                                
=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

                                                
                                    
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
TestScheduledStopUnix (0s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:76: test only runs on unix
--- SKIP: TestScheduledStopUnix (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:39: skipping due to https://github.com/kubernetes/minikube/issues/14232
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    