Test Report: Hyper-V_Windows 18966

6c595620fab5adb75898ef5927d180f0ecb72463:2024-05-28:34666

Failed tests (13/200)

TestAddons/parallel/Registry (73.38s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 11.2149ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-65m8n" [387da1dd-fa4b-4a21-b077-75f42bde981c] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.0238522s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-hdqzh" [b69c4801-6860-4538-809b-017df606f39e] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.0174853s
addons_test.go:342: (dbg) Run:  kubectl --context addons-854100 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-854100 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-854100 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.2820035s)
addons_test.go:361: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-854100 ip
addons_test.go:361: (dbg) Done: out/minikube-windows-amd64.exe -p addons-854100 ip: (2.6946957s)
addons_test.go:366: expected stderr to be -empty- but got: *"W0528 13:30:22.107103    5300 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n"* .  args "out/minikube-windows-amd64.exe -p addons-854100 ip"
2024/05/28 13:30:24 [DEBUG] GET http://172.18.209.103:5000
addons_test.go:390: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-854100 addons disable registry --alsologtostderr -v=1
addons_test.go:390: (dbg) Done: out/minikube-windows-amd64.exe -p addons-854100 addons disable registry --alsologtostderr -v=1: (15.9848667s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p addons-854100 -n addons-854100
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p addons-854100 -n addons-854100: (12.9556126s)
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-854100 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p addons-854100 logs -n 25: (10.6942833s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |                 Args                 |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-696000 | minikube1\jenkins | v1.33.1 | 28 May 24 13:21 PDT |                     |
	|         | -p download-only-696000              |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr            |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                      |                   |         |                     |                     |
	|         | --container-runtime=docker           |                      |                   |         |                     |                     |
	|         | --driver=hyperv                      |                      |                   |         |                     |                     |
	| delete  | --all                                | minikube             | minikube1\jenkins | v1.33.1 | 28 May 24 13:22 PDT | 28 May 24 13:22 PDT |
	| delete  | -p download-only-696000              | download-only-696000 | minikube1\jenkins | v1.33.1 | 28 May 24 13:22 PDT | 28 May 24 13:22 PDT |
	| start   | -o=json --download-only              | download-only-761200 | minikube1\jenkins | v1.33.1 | 28 May 24 13:22 PDT |                     |
	|         | -p download-only-761200              |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr            |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.30.1         |                      |                   |         |                     |                     |
	|         | --container-runtime=docker           |                      |                   |         |                     |                     |
	|         | --driver=hyperv                      |                      |                   |         |                     |                     |
	| delete  | --all                                | minikube             | minikube1\jenkins | v1.33.1 | 28 May 24 13:22 PDT | 28 May 24 13:22 PDT |
	| delete  | -p download-only-761200              | download-only-761200 | minikube1\jenkins | v1.33.1 | 28 May 24 13:22 PDT | 28 May 24 13:22 PDT |
	| delete  | -p download-only-696000              | download-only-696000 | minikube1\jenkins | v1.33.1 | 28 May 24 13:22 PDT | 28 May 24 13:22 PDT |
	| delete  | -p download-only-761200              | download-only-761200 | minikube1\jenkins | v1.33.1 | 28 May 24 13:22 PDT | 28 May 24 13:22 PDT |
	| start   | --download-only -p                   | binary-mirror-084900 | minikube1\jenkins | v1.33.1 | 28 May 24 13:22 PDT |                     |
	|         | binary-mirror-084900                 |                      |                   |         |                     |                     |
	|         | --alsologtostderr                    |                      |                   |         |                     |                     |
	|         | --binary-mirror                      |                      |                   |         |                     |                     |
	|         | http://127.0.0.1:52656               |                      |                   |         |                     |                     |
	|         | --driver=hyperv                      |                      |                   |         |                     |                     |
	| delete  | -p binary-mirror-084900              | binary-mirror-084900 | minikube1\jenkins | v1.33.1 | 28 May 24 13:22 PDT | 28 May 24 13:22 PDT |
	| addons  | enable dashboard -p                  | addons-854100        | minikube1\jenkins | v1.33.1 | 28 May 24 13:22 PDT |                     |
	|         | addons-854100                        |                      |                   |         |                     |                     |
	| addons  | disable dashboard -p                 | addons-854100        | minikube1\jenkins | v1.33.1 | 28 May 24 13:22 PDT |                     |
	|         | addons-854100                        |                      |                   |         |                     |                     |
	| start   | -p addons-854100 --wait=true         | addons-854100        | minikube1\jenkins | v1.33.1 | 28 May 24 13:22 PDT | 28 May 24 13:30 PDT |
	|         | --memory=4000 --alsologtostderr      |                      |                   |         |                     |                     |
	|         | --addons=registry                    |                      |                   |         |                     |                     |
	|         | --addons=metrics-server              |                      |                   |         |                     |                     |
	|         | --addons=volumesnapshots             |                      |                   |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                      |                   |         |                     |                     |
	|         | --addons=gcp-auth                    |                      |                   |         |                     |                     |
	|         | --addons=cloud-spanner               |                      |                   |         |                     |                     |
	|         | --addons=inspektor-gadget            |                      |                   |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                      |                   |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                      |                   |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                      |                   |         |                     |                     |
	|         | --driver=hyperv --addons=ingress     |                      |                   |         |                     |                     |
	|         | --addons=ingress-dns                 |                      |                   |         |                     |                     |
	|         | --addons=helm-tiller                 |                      |                   |         |                     |                     |
	| addons  | enable headlamp                      | addons-854100        | minikube1\jenkins | v1.33.1 | 28 May 24 13:30 PDT | 28 May 24 13:30 PDT |
	|         | -p addons-854100                     |                      |                   |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |                   |         |                     |                     |
	| addons  | disable inspektor-gadget -p          | addons-854100        | minikube1\jenkins | v1.33.1 | 28 May 24 13:30 PDT | 28 May 24 13:30 PDT |
	|         | addons-854100                        |                      |                   |         |                     |                     |
	| addons  | addons-854100 addons disable         | addons-854100        | minikube1\jenkins | v1.33.1 | 28 May 24 13:30 PDT | 28 May 24 13:30 PDT |
	|         | helm-tiller --alsologtostderr        |                      |                   |         |                     |                     |
	|         | -v=1                                 |                      |                   |         |                     |                     |
	| ip      | addons-854100 ip                     | addons-854100        | minikube1\jenkins | v1.33.1 | 28 May 24 13:30 PDT | 28 May 24 13:30 PDT |
	| addons  | addons-854100 addons disable         | addons-854100        | minikube1\jenkins | v1.33.1 | 28 May 24 13:30 PDT | 28 May 24 13:30 PDT |
	|         | registry --alsologtostderr           |                      |                   |         |                     |                     |
	|         | -v=1                                 |                      |                   |         |                     |                     |
	| addons  | disable nvidia-device-plugin         | addons-854100        | minikube1\jenkins | v1.33.1 | 28 May 24 13:30 PDT |                     |
	|         | -p addons-854100                     |                      |                   |         |                     |                     |
	|---------|--------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/28 13:22:45
	Running on machine: minikube1
	Binary: Built with gc go1.22.3 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0528 13:22:45.088784   14668 out.go:291] Setting OutFile to fd 724 ...
	I0528 13:22:45.089378   14668 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 13:22:45.089378   14668 out.go:304] Setting ErrFile to fd 824...
	I0528 13:22:45.089378   14668 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 13:22:45.112904   14668 out.go:298] Setting JSON to false
	I0528 13:22:45.115727   14668 start.go:129] hostinfo: {"hostname":"minikube1","uptime":529,"bootTime":1716927235,"procs":197,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4412 Build 19045.4412","kernelVersion":"10.0.19045.4412 Build 19045.4412","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0528 13:22:45.115727   14668 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0528 13:22:45.123635   14668 out.go:177] * [addons-854100] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4412 Build 19045.4412
	I0528 13:22:45.127948   14668 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0528 13:22:45.127948   14668 notify.go:220] Checking for updates...
	I0528 13:22:45.131656   14668 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0528 13:22:45.134223   14668 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0528 13:22:45.136220   14668 out.go:177]   - MINIKUBE_LOCATION=18966
	I0528 13:22:45.139047   14668 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0528 13:22:45.145102   14668 driver.go:392] Setting default libvirt URI to qemu:///system
	I0528 13:22:50.349813   14668 out.go:177] * Using the hyperv driver based on user configuration
	I0528 13:22:50.354216   14668 start.go:297] selected driver: hyperv
	I0528 13:22:50.356067   14668 start.go:901] validating driver "hyperv" against <nil>
	I0528 13:22:50.356067   14668 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0528 13:22:50.406099   14668 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0528 13:22:50.406844   14668 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0528 13:22:50.406844   14668 cni.go:84] Creating CNI manager for ""
	I0528 13:22:50.406844   14668 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0528 13:22:50.406844   14668 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0528 13:22:50.406844   14668 start.go:340] cluster config:
	{Name:addons-854100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:addons-854100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0528 13:22:50.407931   14668 iso.go:125] acquiring lock: {Name:mk309f19768252c5551da1c999633aa2617813e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0528 13:22:50.413112   14668 out.go:177] * Starting "addons-854100" primary control-plane node in "addons-854100" cluster
	I0528 13:22:50.418619   14668 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0528 13:22:50.419747   14668 preload.go:147] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
	I0528 13:22:50.419747   14668 cache.go:56] Caching tarball of preloaded images
	I0528 13:22:50.420552   14668 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0528 13:22:50.420552   14668 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0528 13:22:50.421523   14668 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-854100\config.json ...
	I0528 13:22:50.422122   14668 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-854100\config.json: {Name:mk3f93a8d958980c966924b15b3dc408b71f3d02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 13:22:50.422310   14668 start.go:360] acquireMachinesLock for addons-854100: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0528 13:22:50.422310   14668 start.go:364] duration metric: took 0s to acquireMachinesLock for "addons-854100"
	I0528 13:22:50.422310   14668 start.go:93] Provisioning new machine with config: &{Name:addons-854100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:addons-854100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0528 13:22:50.423663   14668 start.go:125] createHost starting for "" (driver="hyperv")
	I0528 13:22:50.423898   14668 out.go:204] * Creating hyperv VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0528 13:22:50.423898   14668 start.go:159] libmachine.API.Create for "addons-854100" (driver="hyperv")
	I0528 13:22:50.423898   14668 client.go:168] LocalClient.Create starting
	I0528 13:22:50.429285   14668 main.go:141] libmachine: Creating CA: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0528 13:22:50.502180   14668 main.go:141] libmachine: Creating client certificate: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0528 13:22:50.586371   14668 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0528 13:22:52.663488   14668 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0528 13:22:52.663543   14668 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:22:52.663543   14668 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0528 13:22:54.373735   14668 main.go:141] libmachine: [stdout =====>] : False
	
	I0528 13:22:54.382618   14668 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:22:54.382700   14668 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0528 13:22:55.802630   14668 main.go:141] libmachine: [stdout =====>] : True
	
	I0528 13:22:55.802630   14668 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:22:55.809147   14668 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0528 13:22:59.310646   14668 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0528 13:22:59.310646   14668 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:22:59.324677   14668 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1716398070-18934-amd64.iso...
	I0528 13:22:59.787129   14668 main.go:141] libmachine: Creating SSH key...
	I0528 13:22:59.845021   14668 main.go:141] libmachine: Creating VM...
	I0528 13:22:59.845021   14668 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0528 13:23:02.620589   14668 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0528 13:23:02.631303   14668 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:23:02.631525   14668 main.go:141] libmachine: Using switch "Default Switch"
	I0528 13:23:02.631525   14668 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0528 13:23:04.368290   14668 main.go:141] libmachine: [stdout =====>] : True
	
	I0528 13:23:04.368355   14668 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:23:04.368355   14668 main.go:141] libmachine: Creating VHD
	I0528 13:23:04.368355   14668 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-854100\fixed.vhd' -SizeBytes 10MB -Fixed
	I0528 13:23:08.134809   14668 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-854100\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 6ED8D7ED-39C4-45CE-89CB-76B4D72AE6CB
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0528 13:23:08.145773   14668 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:23:08.145773   14668 main.go:141] libmachine: Writing magic tar header
	I0528 13:23:08.145773   14668 main.go:141] libmachine: Writing SSH key tar header
	I0528 13:23:08.154912   14668 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-854100\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-854100\disk.vhd' -VHDType Dynamic -DeleteSource
	I0528 13:23:11.297300   14668 main.go:141] libmachine: [stdout =====>] : 
	I0528 13:23:11.306503   14668 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:23:11.306503   14668 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-854100\disk.vhd' -SizeBytes 20000MB
	I0528 13:23:13.849936   14668 main.go:141] libmachine: [stdout =====>] : 
	I0528 13:23:13.849936   14668 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:23:13.849936   14668 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM addons-854100 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-854100' -SwitchName 'Default Switch' -MemoryStartupBytes 4000MB
	I0528 13:23:17.483775   14668 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%!)(MISSING) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	addons-854100 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0528 13:23:17.483775   14668 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:23:17.483775   14668 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName addons-854100 -DynamicMemoryEnabled $false
	I0528 13:23:19.647224   14668 main.go:141] libmachine: [stdout =====>] : 
	I0528 13:23:19.647224   14668 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:23:19.647224   14668 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor addons-854100 -Count 2
	I0528 13:23:21.773898   14668 main.go:141] libmachine: [stdout =====>] : 
	I0528 13:23:21.773898   14668 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:23:21.773898   14668 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName addons-854100 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-854100\boot2docker.iso'
	I0528 13:23:24.296266   14668 main.go:141] libmachine: [stdout =====>] : 
	I0528 13:23:24.296266   14668 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:23:24.296266   14668 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName addons-854100 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-854100\disk.vhd'
	I0528 13:23:26.852705   14668 main.go:141] libmachine: [stdout =====>] : 
	I0528 13:23:26.863215   14668 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:23:26.863215   14668 main.go:141] libmachine: Starting VM...
	I0528 13:23:26.863314   14668 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM addons-854100
	I0528 13:23:29.905481   14668 main.go:141] libmachine: [stdout =====>] : 
	I0528 13:23:29.911530   14668 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:23:29.911530   14668 main.go:141] libmachine: Waiting for host to start...
	I0528 13:23:29.911657   14668 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-854100 ).state
	I0528 13:23:32.191386   14668 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 13:23:32.191386   14668 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:23:32.191386   14668 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-854100 ).networkadapters[0]).ipaddresses[0]
	I0528 13:23:34.695588   14668 main.go:141] libmachine: [stdout =====>] : 
	I0528 13:23:34.695588   14668 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:23:35.704982   14668 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-854100 ).state
	I0528 13:23:37.956885   14668 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 13:23:37.967880   14668 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:23:37.967880   14668 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-854100 ).networkadapters[0]).ipaddresses[0]
	I0528 13:23:40.475080   14668 main.go:141] libmachine: [stdout =====>] : 
	I0528 13:23:40.475080   14668 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:23:41.499371   14668 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-854100 ).state
	I0528 13:23:43.664128   14668 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 13:23:43.664216   14668 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:23:43.664284   14668 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-854100 ).networkadapters[0]).ipaddresses[0]
	I0528 13:23:46.130029   14668 main.go:141] libmachine: [stdout =====>] : 
	I0528 13:23:46.130029   14668 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:23:47.138561   14668 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-854100 ).state
	I0528 13:23:49.276071   14668 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 13:23:49.276071   14668 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:23:49.279791   14668 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-854100 ).networkadapters[0]).ipaddresses[0]
	I0528 13:23:51.706487   14668 main.go:141] libmachine: [stdout =====>] : 
	I0528 13:23:51.706487   14668 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:23:52.716027   14668 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-854100 ).state
	I0528 13:23:54.915911   14668 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 13:23:54.920139   14668 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:23:54.920291   14668 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-854100 ).networkadapters[0]).ipaddresses[0]
	I0528 13:23:57.324093   14668 main.go:141] libmachine: [stdout =====>] : 172.18.209.103
	
	I0528 13:23:57.324225   14668 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:23:57.324473   14668 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-854100 ).state
	I0528 13:23:59.337562   14668 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 13:23:59.337860   14668 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:23:59.337935   14668 machine.go:94] provisionDockerMachine start ...
	I0528 13:23:59.337935   14668 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-854100 ).state
	I0528 13:24:01.426912   14668 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 13:24:01.426912   14668 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:24:01.426912   14668 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-854100 ).networkadapters[0]).ipaddresses[0]
	I0528 13:24:03.928549   14668 main.go:141] libmachine: [stdout =====>] : 172.18.209.103
	
	I0528 13:24:03.937369   14668 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:24:03.943384   14668 main.go:141] libmachine: Using SSH client type: native
	I0528 13:24:03.952707   14668 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5ca4a0] 0x5cd080 <nil>  [] 0s} 172.18.209.103 22 <nil> <nil>}
	I0528 13:24:03.952707   14668 main.go:141] libmachine: About to run SSH command:
	hostname
	I0528 13:24:04.081877   14668 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0528 13:24:04.081877   14668 buildroot.go:166] provisioning hostname "addons-854100"
	I0528 13:24:04.081877   14668 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-854100 ).state
	I0528 13:24:06.119951   14668 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 13:24:06.120666   14668 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:24:06.120666   14668 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-854100 ).networkadapters[0]).ipaddresses[0]
	I0528 13:24:08.531937   14668 main.go:141] libmachine: [stdout =====>] : 172.18.209.103
	
	I0528 13:24:08.531937   14668 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:24:08.549474   14668 main.go:141] libmachine: Using SSH client type: native
	I0528 13:24:08.549951   14668 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5ca4a0] 0x5cd080 <nil>  [] 0s} 172.18.209.103 22 <nil> <nil>}
	I0528 13:24:08.550042   14668 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-854100 && echo "addons-854100" | sudo tee /etc/hostname
	I0528 13:24:08.709152   14668 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-854100
	
	I0528 13:24:08.709152   14668 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-854100 ).state
	I0528 13:24:10.732462   14668 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 13:24:10.732462   14668 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:24:10.742815   14668 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-854100 ).networkadapters[0]).ipaddresses[0]
	I0528 13:24:13.212343   14668 main.go:141] libmachine: [stdout =====>] : 172.18.209.103
	
	I0528 13:24:13.212407   14668 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:24:13.218984   14668 main.go:141] libmachine: Using SSH client type: native
	I0528 13:24:13.218984   14668 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5ca4a0] 0x5cd080 <nil>  [] 0s} 172.18.209.103 22 <nil> <nil>}
	I0528 13:24:13.218984   14668 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-854100' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-854100/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-854100' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0528 13:24:13.368006   14668 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0528 13:24:13.368114   14668 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0528 13:24:13.368248   14668 buildroot.go:174] setting up certificates
	I0528 13:24:13.368248   14668 provision.go:84] configureAuth start
	I0528 13:24:13.368248   14668 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-854100 ).state
	I0528 13:24:15.495534   14668 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 13:24:15.505532   14668 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:24:15.505884   14668 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-854100 ).networkadapters[0]).ipaddresses[0]
	I0528 13:24:18.026169   14668 main.go:141] libmachine: [stdout =====>] : 172.18.209.103
	
	I0528 13:24:18.026169   14668 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:24:18.026388   14668 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-854100 ).state
	I0528 13:24:20.149949   14668 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 13:24:20.150217   14668 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:24:20.150217   14668 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-854100 ).networkadapters[0]).ipaddresses[0]
	I0528 13:24:22.731823   14668 main.go:141] libmachine: [stdout =====>] : 172.18.209.103
	
	I0528 13:24:22.731823   14668 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:24:22.733923   14668 provision.go:143] copyHostCerts
	I0528 13:24:22.734318   14668 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0528 13:24:22.735551   14668 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1675 bytes)
	I0528 13:24:22.736755   14668 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0528 13:24:22.737346   14668 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.addons-854100 san=[127.0.0.1 172.18.209.103 addons-854100 localhost minikube]
	I0528 13:24:22.937273   14668 provision.go:177] copyRemoteCerts
	I0528 13:24:22.952147   14668 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0528 13:24:22.952147   14668 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-854100 ).state
	I0528 13:24:25.064136   14668 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 13:24:25.074930   14668 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:24:25.075220   14668 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-854100 ).networkadapters[0]).ipaddresses[0]
	I0528 13:24:27.582359   14668 main.go:141] libmachine: [stdout =====>] : 172.18.209.103
	
	I0528 13:24:27.582359   14668 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:24:27.582359   14668 sshutil.go:53] new ssh client: &{IP:172.18.209.103 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-854100\id_rsa Username:docker}
	I0528 13:24:27.687359   14668 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7352091s)
	I0528 13:24:27.688290   14668 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0528 13:24:27.729804   14668 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0528 13:24:27.780362   14668 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0528 13:24:27.824899   14668 provision.go:87] duration metric: took 14.4566451s to configureAuth
	I0528 13:24:27.824899   14668 buildroot.go:189] setting minikube options for container-runtime
	I0528 13:24:27.825785   14668 config.go:182] Loaded profile config "addons-854100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0528 13:24:27.825862   14668 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-854100 ).state
	I0528 13:24:29.904623   14668 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 13:24:29.904623   14668 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:24:29.904623   14668 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-854100 ).networkadapters[0]).ipaddresses[0]
	I0528 13:24:32.481135   14668 main.go:141] libmachine: [stdout =====>] : 172.18.209.103
	
	I0528 13:24:32.481212   14668 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:24:32.487554   14668 main.go:141] libmachine: Using SSH client type: native
	I0528 13:24:32.487554   14668 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5ca4a0] 0x5cd080 <nil>  [] 0s} 172.18.209.103 22 <nil> <nil>}
	I0528 13:24:32.487554   14668 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0528 13:24:32.622824   14668 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0528 13:24:32.622824   14668 buildroot.go:70] root file system type: tmpfs
	I0528 13:24:32.623356   14668 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0528 13:24:32.623488   14668 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-854100 ).state
	I0528 13:24:34.918499   14668 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 13:24:34.918499   14668 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:24:34.918499   14668 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-854100 ).networkadapters[0]).ipaddresses[0]
	I0528 13:24:37.383956   14668 main.go:141] libmachine: [stdout =====>] : 172.18.209.103
	
	I0528 13:24:37.383956   14668 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:24:37.389863   14668 main.go:141] libmachine: Using SSH client type: native
	I0528 13:24:37.400412   14668 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5ca4a0] 0x5cd080 <nil>  [] 0s} 172.18.209.103 22 <nil> <nil>}
	I0528 13:24:37.400412   14668 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0528 13:24:37.554601   14668 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0528 13:24:37.555133   14668 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-854100 ).state
	I0528 13:24:39.620482   14668 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 13:24:39.620482   14668 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:24:39.630471   14668 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-854100 ).networkadapters[0]).ipaddresses[0]
	I0528 13:24:42.158971   14668 main.go:141] libmachine: [stdout =====>] : 172.18.209.103
	
	I0528 13:24:42.159038   14668 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:24:42.164755   14668 main.go:141] libmachine: Using SSH client type: native
	I0528 13:24:42.165274   14668 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5ca4a0] 0x5cd080 <nil>  [] 0s} 172.18.209.103 22 <nil> <nil>}
	I0528 13:24:42.165274   14668 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0528 13:24:44.276208   14668 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0528 13:24:44.276208   14668 machine.go:97] duration metric: took 44.9382523s to provisionDockerMachine
	I0528 13:24:44.276208   14668 client.go:171] duration metric: took 1m53.8522619s to LocalClient.Create
	I0528 13:24:44.276208   14668 start.go:167] duration metric: took 1m53.8522619s to libmachine.API.Create "addons-854100"
	I0528 13:24:44.276208   14668 start.go:293] postStartSetup for "addons-854100" (driver="hyperv")
	I0528 13:24:44.276208   14668 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0528 13:24:44.289577   14668 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0528 13:24:44.289577   14668 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-854100 ).state
	I0528 13:24:46.401868   14668 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 13:24:46.412890   14668 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:24:46.413071   14668 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-854100 ).networkadapters[0]).ipaddresses[0]
	I0528 13:24:48.930351   14668 main.go:141] libmachine: [stdout =====>] : 172.18.209.103
	
	I0528 13:24:48.930498   14668 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:24:48.930498   14668 sshutil.go:53] new ssh client: &{IP:172.18.209.103 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-854100\id_rsa Username:docker}
	I0528 13:24:49.038184   14668 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.7481991s)
	I0528 13:24:49.051912   14668 ssh_runner.go:195] Run: cat /etc/os-release
	I0528 13:24:49.064094   14668 info.go:137] Remote host: Buildroot 2023.02.9
	I0528 13:24:49.064269   14668 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0528 13:24:49.064984   14668 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0528 13:24:49.065325   14668 start.go:296] duration metric: took 4.7891146s for postStartSetup
	I0528 13:24:49.070160   14668 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-854100 ).state
	I0528 13:24:51.103270   14668 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 13:24:51.103270   14668 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:24:51.114962   14668 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-854100 ).networkadapters[0]).ipaddresses[0]
	I0528 13:24:53.568316   14668 main.go:141] libmachine: [stdout =====>] : 172.18.209.103
	
	I0528 13:24:53.568601   14668 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:24:53.568829   14668 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-854100\config.json ...
	I0528 13:24:53.571966   14668 start.go:128] duration metric: took 2m3.1482511s to createHost
	I0528 13:24:53.571966   14668 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-854100 ).state
	I0528 13:24:55.652782   14668 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 13:24:55.652782   14668 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:24:55.664504   14668 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-854100 ).networkadapters[0]).ipaddresses[0]
	I0528 13:24:58.120733   14668 main.go:141] libmachine: [stdout =====>] : 172.18.209.103
	
	I0528 13:24:58.120851   14668 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:24:58.126876   14668 main.go:141] libmachine: Using SSH client type: native
	I0528 13:24:58.127445   14668 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5ca4a0] 0x5cd080 <nil>  [] 0s} 172.18.209.103 22 <nil> <nil>}
	I0528 13:24:58.127578   14668 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0528 13:24:58.259927   14668 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716927898.254048082
	
	I0528 13:24:58.260006   14668 fix.go:216] guest clock: 1716927898.254048082
	I0528 13:24:58.260006   14668 fix.go:229] Guest: 2024-05-28 13:24:58.254048082 -0700 PDT Remote: 2024-05-28 13:24:53.5719665 -0700 PDT m=+128.561459601 (delta=4.682081582s)
	I0528 13:24:58.260006   14668 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-854100 ).state
	I0528 13:25:00.232310   14668 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 13:25:00.232310   14668 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:25:00.232310   14668 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-854100 ).networkadapters[0]).ipaddresses[0]
	I0528 13:25:02.709209   14668 main.go:141] libmachine: [stdout =====>] : 172.18.209.103
	
	I0528 13:25:02.721370   14668 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:25:02.727171   14668 main.go:141] libmachine: Using SSH client type: native
	I0528 13:25:02.728132   14668 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5ca4a0] 0x5cd080 <nil>  [] 0s} 172.18.209.103 22 <nil> <nil>}
	I0528 13:25:02.728132   14668 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1716927898
	I0528 13:25:02.875536   14668 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue May 28 20:24:58 UTC 2024
	
	I0528 13:25:02.875536   14668 fix.go:236] clock set: Tue May 28 20:24:58 UTC 2024
	 (err=<nil>)
	I0528 13:25:02.875536   14668 start.go:83] releasing machines lock for "addons-854100", held for 2m12.4531688s
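
The clock-skew fix above works from two timestamps: the guest reported 1716927898.254048082 (2024-05-28 13:24:58.254 PDT) while the host recorded 13:24:53.572 PDT at the end of createHost, a drift of about 4.68 s, so the guest clock is reset with "sudo date -s @1716927898". A small Go sketch of that comparison using the values from the log (the 2 s threshold is an assumption for illustration, not minikube's exact tolerance):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Values copied from the log lines above.
        guest := time.Unix(1716927898, 254048082) // guest clock: 2024-05-28 13:24:58.254 PDT
        remote, err := time.Parse(time.RFC3339Nano, "2024-05-28T13:24:53.5719665-07:00")
        if err != nil {
            panic(err)
        }

        delta := guest.Sub(remote)
        fmt.Println("drift:", delta) // ~4.682081582s, matching the log

        // Threshold is illustrative only; on excessive drift the guest clock is reset.
        if delta > 2*time.Second || delta < -2*time.Second {
            fmt.Printf("would run: sudo date -s @%d\n", guest.Unix())
        }
    }
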
	I0528 13:25:02.876167   14668 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-854100 ).state
	I0528 13:25:05.009385   14668 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 13:25:05.009466   14668 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:25:05.009466   14668 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-854100 ).networkadapters[0]).ipaddresses[0]
	I0528 13:25:07.470545   14668 main.go:141] libmachine: [stdout =====>] : 172.18.209.103
	
	I0528 13:25:07.470623   14668 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:25:07.474778   14668 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0528 13:25:07.474971   14668 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-854100 ).state
	I0528 13:25:07.486709   14668 ssh_runner.go:195] Run: cat /version.json
	I0528 13:25:07.486709   14668 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-854100 ).state
	I0528 13:25:09.663837   14668 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 13:25:09.663837   14668 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:25:09.663837   14668 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-854100 ).networkadapters[0]).ipaddresses[0]
	I0528 13:25:09.663837   14668 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 13:25:09.664122   14668 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:25:09.664236   14668 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-854100 ).networkadapters[0]).ipaddresses[0]
	I0528 13:25:12.294768   14668 main.go:141] libmachine: [stdout =====>] : 172.18.209.103
	
	I0528 13:25:12.294768   14668 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:25:12.306504   14668 sshutil.go:53] new ssh client: &{IP:172.18.209.103 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-854100\id_rsa Username:docker}
	I0528 13:25:12.329477   14668 main.go:141] libmachine: [stdout =====>] : 172.18.209.103
	
	I0528 13:25:12.329513   14668 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:25:12.329513   14668 sshutil.go:53] new ssh client: &{IP:172.18.209.103 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-854100\id_rsa Username:docker}
	I0528 13:25:12.407865   14668 ssh_runner.go:235] Completed: cat /version.json: (4.9211538s)
	I0528 13:25:12.420147   14668 ssh_runner.go:195] Run: systemctl --version
	I0528 13:25:12.529255   14668 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.0544738s)
	I0528 13:25:12.543026   14668 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0528 13:25:12.552337   14668 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0528 13:25:12.564980   14668 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0528 13:25:12.593052   14668 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0528 13:25:12.593159   14668 start.go:494] detecting cgroup driver to use...
	I0528 13:25:12.593452   14668 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0528 13:25:12.641828   14668 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0528 13:25:12.674138   14668 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0528 13:25:12.693514   14668 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0528 13:25:12.705504   14668 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0528 13:25:12.736262   14668 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0528 13:25:12.767075   14668 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0528 13:25:12.796619   14668 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0528 13:25:12.826592   14668 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0528 13:25:12.855686   14668 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0528 13:25:12.886427   14668 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0528 13:25:12.916076   14668 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0528 13:25:12.946639   14668 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0528 13:25:12.974881   14668 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0528 13:25:13.004292   14668 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 13:25:13.196918   14668 ssh_runner.go:195] Run: sudo systemctl restart containerd
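
The block of sed commands above rewrites /etc/containerd/config.toml so containerd uses the runc v2 shim with SystemdCgroup = false (the cgroupfs driver), points conf_dir at /etc/cni/net.d, re-enables unprivileged ports, and pins the pause image, before containerd is restarted. A sketch of the central substitution done in-process in Go rather than via sed (same file path as the log; illustrative only):

    package main

    import (
        "log"
        "os"
        "regexp"
    )

    func main() {
        const path = "/etc/containerd/config.toml"

        data, err := os.ReadFile(path)
        if err != nil {
            log.Fatal(err)
        }
        // In-process equivalent of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
        re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
        out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))

        if err := os.WriteFile(path, out, 0o644); err != nil {
            log.Fatal(err)
        }
    }
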
	I0528 13:25:13.232434   14668 start.go:494] detecting cgroup driver to use...
	I0528 13:25:13.246151   14668 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0528 13:25:13.283800   14668 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0528 13:25:13.314901   14668 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0528 13:25:13.364284   14668 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0528 13:25:13.403375   14668 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0528 13:25:13.441040   14668 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0528 13:25:13.506231   14668 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0528 13:25:13.531644   14668 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0528 13:25:13.578353   14668 ssh_runner.go:195] Run: which cri-dockerd
	I0528 13:25:13.595755   14668 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0528 13:25:13.612588   14668 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0528 13:25:13.653399   14668 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0528 13:25:13.858117   14668 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0528 13:25:14.040248   14668 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0528 13:25:14.040572   14668 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0528 13:25:14.091560   14668 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 13:25:14.309155   14668 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0528 13:25:16.828075   14668 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5189189s)
	I0528 13:25:16.837688   14668 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0528 13:25:16.876031   14668 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0528 13:25:16.911985   14668 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0528 13:25:17.116826   14668 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0528 13:25:17.315672   14668 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 13:25:17.522850   14668 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0528 13:25:17.564353   14668 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0528 13:25:17.601192   14668 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 13:25:17.795631   14668 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0528 13:25:17.910594   14668 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0528 13:25:17.924742   14668 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0528 13:25:17.933243   14668 start.go:562] Will wait 60s for crictl version
	I0528 13:25:17.945440   14668 ssh_runner.go:195] Run: which crictl
	I0528 13:25:17.963882   14668 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0528 13:25:18.019631   14668 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0528 13:25:18.030450   14668 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0528 13:25:18.074712   14668 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0528 13:25:18.119427   14668 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.0.2 ...
	I0528 13:25:18.119427   14668 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0528 13:25:18.123757   14668 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0528 13:25:18.123757   14668 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0528 13:25:18.123757   14668 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0528 13:25:18.123757   14668 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:91:32:cc Flags:up|broadcast|multicast|running}
	I0528 13:25:18.125633   14668 ip.go:210] interface addr: fe80::6a65:d82d:f435:d71f/64
	I0528 13:25:18.125633   14668 ip.go:210] interface addr: 172.18.208.1/20
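
getIPForInterface above scans the host's adapters for one whose name starts with "vEthernet (Default Switch)" and takes its IPv4 address (172.18.208.1), which is then mapped to host.minikube.internal inside the guest. A minimal sketch of that lookup with Go's standard library:

    package main

    import (
        "fmt"
        "net"
        "strings"
    )

    func main() {
        ifaces, err := net.Interfaces()
        if err != nil {
            panic(err)
        }
        for _, ifc := range ifaces {
            // Same prefix match the log shows for "vEthernet (Default Switch)".
            if !strings.HasPrefix(ifc.Name, "vEthernet (Default Switch)") {
                continue
            }
            addrs, _ := ifc.Addrs()
            for _, a := range addrs {
                if ipnet, ok := a.(*net.IPNet); ok && ipnet.IP.To4() != nil {
                    fmt.Println(ipnet.IP) // 172.18.208.1 in this run
                }
            }
        }
    }
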
	I0528 13:25:18.139699   14668 ssh_runner.go:195] Run: grep 172.18.208.1	host.minikube.internal$ /etc/hosts
	I0528 13:25:18.146741   14668 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.18.208.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0528 13:25:18.169419   14668 kubeadm.go:877] updating cluster {Name:addons-854100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.3
0.1 ClusterName:addons-854100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.18.209.103 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0528 13:25:18.169419   14668 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0528 13:25:18.181259   14668 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0528 13:25:18.203292   14668 docker.go:685] Got preloaded images: 
	I0528 13:25:18.203325   14668 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.1 wasn't preloaded
	I0528 13:25:18.215678   14668 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0528 13:25:18.251002   14668 ssh_runner.go:195] Run: which lz4
	I0528 13:25:18.277817   14668 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0528 13:25:18.284880   14668 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0528 13:25:18.284880   14668 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359564351 bytes)
	I0528 13:25:20.359342   14668 docker.go:649] duration metric: took 2.0930476s to copy over tarball
	I0528 13:25:20.373845   14668 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0528 13:25:25.511287   14668 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (5.1369152s)
	I0528 13:25:25.511287   14668 ssh_runner.go:146] rm: /preloaded.tar.lz4
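
The preload step first checks for /preloaded.tar.lz4 on the guest, copies the roughly 360 MB tarball over when it is missing, and unpacks it into /var with extended attributes preserved so Docker's image store is pre-populated. A sketch of the extract command as the guest runs it (requires tar and lz4 on the target; shown with os/exec rather than minikube's ssh_runner):

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        // The same extract command the log shows ssh_runner running inside the VM.
        cmd := exec.Command("sudo", "tar",
            "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
        if out, err := cmd.CombinedOutput(); err != nil {
            log.Fatalf("extract failed: %v\n%s", err, out)
        }
    }
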
	I0528 13:25:25.573324   14668 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0528 13:25:25.593436   14668 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0528 13:25:25.636591   14668 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 13:25:25.834120   14668 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0528 13:25:31.529254   14668 ssh_runner.go:235] Completed: sudo systemctl restart docker: (5.6951306s)
	I0528 13:25:31.541625   14668 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0528 13:25:31.566919   14668 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0528 13:25:31.567965   14668 cache_images.go:84] Images are preloaded, skipping loading
	I0528 13:25:31.567995   14668 kubeadm.go:928] updating node { 172.18.209.103 8443 v1.30.1 docker true true} ...
	I0528 13:25:31.567995   14668 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-854100 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.18.209.103
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:addons-854100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0528 13:25:31.576963   14668 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0528 13:25:31.612039   14668 cni.go:84] Creating CNI manager for ""
	I0528 13:25:31.612039   14668 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0528 13:25:31.612039   14668 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0528 13:25:31.612039   14668 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.18.209.103 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-854100 NodeName:addons-854100 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.18.209.103"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.18.209.103 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0528 13:25:31.612039   14668 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.18.209.103
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-854100"
	  kubeletExtraArgs:
	    node-ip: 172.18.209.103
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.18.209.103"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0528 13:25:31.626184   14668 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0528 13:25:31.646412   14668 binaries.go:44] Found k8s binaries, skipping transfer
	I0528 13:25:31.659246   14668 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0528 13:25:31.676127   14668 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0528 13:25:31.707860   14668 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0528 13:25:31.738355   14668 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0528 13:25:31.782243   14668 ssh_runner.go:195] Run: grep 172.18.209.103	control-plane.minikube.internal$ /etc/hosts
	I0528 13:25:31.787338   14668 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.18.209.103	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
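
Both host.minikube.internal (earlier) and control-plane.minikube.internal (here) are pinned in the guest's /etc/hosts with the same idempotent one-liner: drop any existing line for the name, append the fresh mapping, and copy the temp file back over /etc/hosts. A sketch of the equivalent update done directly in Go (IP and hostname from the log; writing /etc/hosts assumes sufficient privileges):

    package main

    import (
        "log"
        "os"
        "strings"
    )

    // setHostsEntry drops any existing line for name and appends "ip<TAB>name",
    // the same effect as the grep/echo/cp one-liner in the log.
    func setHostsEntry(path, ip, name string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if !strings.HasSuffix(line, "\t"+name) {
                kept = append(kept, line)
            }
        }
        kept = append(kept, ip+"\t"+name)
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
        if err := setHostsEntry("/etc/hosts", "172.18.209.103", "control-plane.minikube.internal"); err != nil {
            log.Fatal(err)
        }
    }
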
	I0528 13:25:31.821124   14668 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 13:25:32.012327   14668 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0528 13:25:32.042180   14668 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-854100 for IP: 172.18.209.103
	I0528 13:25:32.042180   14668 certs.go:194] generating shared ca certs ...
	I0528 13:25:32.042180   14668 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 13:25:32.042771   14668 certs.go:240] generating "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0528 13:25:32.422435   14668 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt ...
	I0528 13:25:32.422435   14668 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt: {Name:mk7a559291b59fd1cacf23acd98c76aadd417440 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 13:25:32.424883   14668 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key ...
	I0528 13:25:32.424883   14668 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key: {Name:mkbedd9bb05780b48b3744f1500f6ab6cea55798 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 13:25:32.425795   14668 certs.go:240] generating "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0528 13:25:32.631570   14668 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt ...
	I0528 13:25:32.631570   14668 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt: {Name:mkd3d06d8ce13b6ea5bb86cd17b70e85416bbf21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 13:25:32.633773   14668 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key ...
	I0528 13:25:32.633773   14668 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key: {Name:mkf3a613f937d3e2839d9a0e4a8e5134d5e75dad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 13:25:32.634130   14668 certs.go:256] generating profile certs ...
	I0528 13:25:32.635173   14668 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-854100\client.key
	I0528 13:25:32.635754   14668 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-854100\client.crt with IP's: []
	I0528 13:25:32.943914   14668 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-854100\client.crt ...
	I0528 13:25:32.943914   14668 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-854100\client.crt: {Name:mk982d968d6bdc8c47f6e9dd57b10897557a6e75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 13:25:32.946026   14668 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-854100\client.key ...
	I0528 13:25:32.946026   14668 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-854100\client.key: {Name:mk0dc751dedf28eae3fc9d25c66695c556d58f20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 13:25:32.946384   14668 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-854100\apiserver.key.33a5d016
	I0528 13:25:32.946384   14668 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-854100\apiserver.crt.33a5d016 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.18.209.103]
	I0528 13:25:33.229584   14668 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-854100\apiserver.crt.33a5d016 ...
	I0528 13:25:33.229584   14668 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-854100\apiserver.crt.33a5d016: {Name:mkb6b4e13fdf4b257c2cd874510a008cf3e064e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 13:25:33.231131   14668 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-854100\apiserver.key.33a5d016 ...
	I0528 13:25:33.231131   14668 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-854100\apiserver.key.33a5d016: {Name:mk35833b49e7d40c4d427a552b0070441cfb456b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 13:25:33.231689   14668 certs.go:381] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-854100\apiserver.crt.33a5d016 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-854100\apiserver.crt
	I0528 13:25:33.242943   14668 certs.go:385] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-854100\apiserver.key.33a5d016 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-854100\apiserver.key
	I0528 13:25:33.243885   14668 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-854100\proxy-client.key
	I0528 13:25:33.243885   14668 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-854100\proxy-client.crt with IP's: []
	I0528 13:25:33.383205   14668 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-854100\proxy-client.crt ...
	I0528 13:25:33.383205   14668 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-854100\proxy-client.crt: {Name:mk0ad4f275892a41d7ec120989520de19d5b9fbc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 13:25:33.385339   14668 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-854100\proxy-client.key ...
	I0528 13:25:33.385339   14668 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-854100\proxy-client.key: {Name:mkeb4b13b95b4c3c980999efbe6f2e0f697ec5a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 13:25:33.395792   14668 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0528 13:25:33.403105   14668 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0528 13:25:33.411333   14668 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0528 13:25:33.418875   14668 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0528 13:25:33.427527   14668 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0528 13:25:33.477570   14668 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0528 13:25:33.524500   14668 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0528 13:25:33.565749   14668 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0528 13:25:33.612107   14668 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-854100\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0528 13:25:33.650600   14668 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-854100\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0528 13:25:33.687679   14668 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-854100\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0528 13:25:33.734835   14668 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-854100\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0528 13:25:33.779320   14668 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0528 13:25:33.824706   14668 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0528 13:25:33.867655   14668 ssh_runner.go:195] Run: openssl version
	I0528 13:25:33.891213   14668 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0528 13:25:33.923862   14668 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0528 13:25:33.930019   14668 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 28 20:25 /usr/share/ca-certificates/minikubeCA.pem
	I0528 13:25:33.942746   14668 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0528 13:25:33.965663   14668 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
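
minikubeCA.pem is linked into /etc/ssl/certs and then also linked under its OpenSSL subject hash (b5213941.0), the lookup-by-hash directory layout OpenSSL uses to find trusted CAs. A sketch of deriving that link name and creating it, equivalent to the two ln -fs commands above (assumes the openssl binary is on PATH):

    package main

    import (
        "log"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        // "openssl x509 -hash -noout -in <cert>" prints the subject hash, b5213941 here.
        out, err := exec.Command("openssl", "x509", "-hash", "-noout",
            "-in", "/usr/share/ca-certificates/minikubeCA.pem").Output()
        if err != nil {
            log.Fatal(err)
        }
        link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"

        _ = os.Remove(link) // mirror "ln -fs": replace any stale link
        if err := os.Symlink("/etc/ssl/certs/minikubeCA.pem", link); err != nil {
            log.Fatal(err)
        }
    }
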
	I0528 13:25:33.996689   14668 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0528 13:25:34.003573   14668 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0528 13:25:34.003994   14668 kubeadm.go:391] StartCluster: {Name:addons-854100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1
ClusterName:addons-854100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.18.209.103 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mo
untPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0528 13:25:34.015137   14668 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0528 13:25:34.056044   14668 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0528 13:25:34.089190   14668 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0528 13:25:34.120083   14668 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0528 13:25:34.138731   14668 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0528 13:25:34.138780   14668 kubeadm.go:156] found existing configuration files:
	
	I0528 13:25:34.150654   14668 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0528 13:25:34.167890   14668 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0528 13:25:34.180389   14668 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0528 13:25:34.211836   14668 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0528 13:25:34.228534   14668 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0528 13:25:34.240457   14668 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0528 13:25:34.271285   14668 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0528 13:25:34.288725   14668 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0528 13:25:34.300477   14668 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0528 13:25:34.332965   14668 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0528 13:25:34.348140   14668 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0528 13:25:34.361004   14668 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0528 13:25:34.375423   14668 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0528 13:25:34.613985   14668 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0528 13:25:48.520814   14668 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0528 13:25:48.521023   14668 kubeadm.go:309] [preflight] Running pre-flight checks
	I0528 13:25:48.521283   14668 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0528 13:25:48.521560   14668 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0528 13:25:48.521809   14668 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0528 13:25:48.522059   14668 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0528 13:25:48.524986   14668 out.go:204]   - Generating certificates and keys ...
	I0528 13:25:48.524986   14668 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0528 13:25:48.525560   14668 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0528 13:25:48.525766   14668 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0528 13:25:48.525985   14668 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0528 13:25:48.526048   14668 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0528 13:25:48.526048   14668 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0528 13:25:48.526048   14668 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0528 13:25:48.526634   14668 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-854100 localhost] and IPs [172.18.209.103 127.0.0.1 ::1]
	I0528 13:25:48.526634   14668 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0528 13:25:48.526634   14668 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-854100 localhost] and IPs [172.18.209.103 127.0.0.1 ::1]
	I0528 13:25:48.526634   14668 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0528 13:25:48.527360   14668 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0528 13:25:48.527360   14668 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0528 13:25:48.527360   14668 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0528 13:25:48.527360   14668 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0528 13:25:48.527360   14668 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0528 13:25:48.528080   14668 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0528 13:25:48.528080   14668 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0528 13:25:48.528080   14668 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0528 13:25:48.528080   14668 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0528 13:25:48.528823   14668 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0528 13:25:48.530709   14668 out.go:204]   - Booting up control plane ...
	I0528 13:25:48.530709   14668 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0528 13:25:48.531707   14668 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0528 13:25:48.531707   14668 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0528 13:25:48.531707   14668 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0528 13:25:48.531707   14668 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0528 13:25:48.531707   14668 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0528 13:25:48.532708   14668 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0528 13:25:48.532708   14668 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0528 13:25:48.532708   14668 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.001612615s
	I0528 13:25:48.532708   14668 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0528 13:25:48.532708   14668 kubeadm.go:309] [api-check] The API server is healthy after 7.004155434s
	I0528 13:25:48.532708   14668 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0528 13:25:48.532708   14668 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0528 13:25:48.533698   14668 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0528 13:25:48.533698   14668 kubeadm.go:309] [mark-control-plane] Marking the node addons-854100 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0528 13:25:48.533698   14668 kubeadm.go:309] [bootstrap-token] Using token: zipcjs.frctf26tk5s1uqxx
	I0528 13:25:48.535744   14668 out.go:204]   - Configuring RBAC rules ...
	I0528 13:25:48.535744   14668 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0528 13:25:48.536760   14668 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0528 13:25:48.536760   14668 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0528 13:25:48.536760   14668 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0528 13:25:48.536760   14668 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0528 13:25:48.537695   14668 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0528 13:25:48.537695   14668 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0528 13:25:48.537695   14668 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0528 13:25:48.537695   14668 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0528 13:25:48.537695   14668 kubeadm.go:309] 
	I0528 13:25:48.537695   14668 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0528 13:25:48.537695   14668 kubeadm.go:309] 
	I0528 13:25:48.538698   14668 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0528 13:25:48.538698   14668 kubeadm.go:309] 
	I0528 13:25:48.538698   14668 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0528 13:25:48.538698   14668 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0528 13:25:48.538698   14668 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0528 13:25:48.538698   14668 kubeadm.go:309] 
	I0528 13:25:48.538698   14668 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0528 13:25:48.538698   14668 kubeadm.go:309] 
	I0528 13:25:48.538698   14668 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0528 13:25:48.538698   14668 kubeadm.go:309] 
	I0528 13:25:48.539749   14668 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0528 13:25:48.539749   14668 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0528 13:25:48.539749   14668 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0528 13:25:48.539749   14668 kubeadm.go:309] 
	I0528 13:25:48.539749   14668 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0528 13:25:48.539749   14668 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0528 13:25:48.539749   14668 kubeadm.go:309] 
	I0528 13:25:48.539749   14668 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token zipcjs.frctf26tk5s1uqxx \
	I0528 13:25:48.540692   14668 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:539ff17077d9a18fe6ace348cbfbe22affc6c6390f84ec7c9294c19b6ec125ea \
	I0528 13:25:48.540692   14668 kubeadm.go:309] 	--control-plane 
	I0528 13:25:48.540692   14668 kubeadm.go:309] 
	I0528 13:25:48.540692   14668 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0528 13:25:48.540692   14668 kubeadm.go:309] 
	I0528 13:25:48.540692   14668 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token zipcjs.frctf26tk5s1uqxx \
	I0528 13:25:48.540692   14668 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:539ff17077d9a18fe6ace348cbfbe22affc6c6390f84ec7c9294c19b6ec125ea 
	I0528 13:25:48.540692   14668 cni.go:84] Creating CNI manager for ""
	I0528 13:25:48.540692   14668 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0528 13:25:48.545697   14668 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0528 13:25:48.559735   14668 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0528 13:25:48.580097   14668 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0528 13:25:48.613666   14668 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0528 13:25:48.627253   14668 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 13:25:48.628262   14668 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-854100 minikube.k8s.io/updated_at=2024_05_28T13_25_48_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=c95b4fdda455689199e2a93674568b261e34dc82 minikube.k8s.io/name=addons-854100 minikube.k8s.io/primary=true
	I0528 13:25:48.635548   14668 ops.go:34] apiserver oom_adj: -16
	I0528 13:25:48.792168   14668 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 13:25:49.294444   14668 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 13:25:49.796861   14668 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 13:25:50.296994   14668 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 13:25:50.798776   14668 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 13:25:51.302553   14668 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 13:25:51.800657   14668 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 13:25:52.303728   14668 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 13:25:52.804332   14668 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 13:25:53.308868   14668 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 13:25:53.793844   14668 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 13:25:54.299948   14668 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 13:25:54.804091   14668 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 13:25:55.297181   14668 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 13:25:55.796064   14668 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 13:25:56.300759   14668 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 13:25:56.792829   14668 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 13:25:57.307203   14668 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 13:25:57.793634   14668 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 13:25:58.297778   14668 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 13:25:58.801760   14668 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 13:25:59.291540   14668 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 13:25:59.797662   14668 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 13:26:00.300094   14668 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 13:26:00.791442   14668 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 13:26:01.296610   14668 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 13:26:01.424579   14668 kubeadm.go:1107] duration metric: took 12.8099307s to wait for elevateKubeSystemPrivileges
	W0528 13:26:01.424579   14668 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0528 13:26:01.424579   14668 kubeadm.go:393] duration metric: took 27.4205717s to StartCluster
	I0528 13:26:01.424579   14668 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 13:26:01.424579   14668 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0528 13:26:01.425581   14668 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 13:26:01.427585   14668 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0528 13:26:01.427585   14668 start.go:234] Will wait 6m0s for node &{Name: IP:172.18.209.103 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0528 13:26:01.427585   14668 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0528 13:26:01.432580   14668 out.go:177] * Verifying Kubernetes components...
	I0528 13:26:01.427585   14668 addons.go:69] Setting yakd=true in profile "addons-854100"
	I0528 13:26:01.427585   14668 addons.go:69] Setting metrics-server=true in profile "addons-854100"
	I0528 13:26:01.427585   14668 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-854100"
	I0528 13:26:01.427585   14668 addons.go:69] Setting registry=true in profile "addons-854100"
	I0528 13:26:01.427585   14668 addons.go:69] Setting inspektor-gadget=true in profile "addons-854100"
	I0528 13:26:01.427585   14668 addons.go:69] Setting storage-provisioner=true in profile "addons-854100"
	I0528 13:26:01.427585   14668 addons.go:69] Setting cloud-spanner=true in profile "addons-854100"
	I0528 13:26:01.427585   14668 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-854100"
	I0528 13:26:01.427585   14668 addons.go:69] Setting volcano=true in profile "addons-854100"
	I0528 13:26:01.427585   14668 addons.go:69] Setting helm-tiller=true in profile "addons-854100"
	I0528 13:26:01.427585   14668 addons.go:69] Setting ingress=true in profile "addons-854100"
	I0528 13:26:01.427585   14668 addons.go:69] Setting volumesnapshots=true in profile "addons-854100"
	I0528 13:26:01.427585   14668 addons.go:69] Setting ingress-dns=true in profile "addons-854100"
	I0528 13:26:01.427585   14668 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-854100"
	I0528 13:26:01.427585   14668 addons.go:69] Setting default-storageclass=true in profile "addons-854100"
	I0528 13:26:01.427585   14668 addons.go:69] Setting gcp-auth=true in profile "addons-854100"
	I0528 13:26:01.427585   14668 config.go:182] Loaded profile config "addons-854100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0528 13:26:01.433586   14668 addons.go:234] Setting addon registry=true in "addons-854100"
	I0528 13:26:01.433586   14668 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-854100"
	I0528 13:26:01.433586   14668 host.go:66] Checking if "addons-854100" exists ...
	I0528 13:26:01.433586   14668 addons.go:234] Setting addon volcano=true in "addons-854100"
	I0528 13:26:01.433586   14668 addons.go:234] Setting addon ingress=true in "addons-854100"
	I0528 13:26:01.433586   14668 host.go:66] Checking if "addons-854100" exists ...
	I0528 13:26:01.433586   14668 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-854100"
	I0528 13:26:01.433586   14668 host.go:66] Checking if "addons-854100" exists ...
	I0528 13:26:01.433586   14668 addons.go:234] Setting addon storage-provisioner=true in "addons-854100"
	I0528 13:26:01.433586   14668 host.go:66] Checking if "addons-854100" exists ...
	I0528 13:26:01.433586   14668 addons.go:234] Setting addon volumesnapshots=true in "addons-854100"
	I0528 13:26:01.433586   14668 host.go:66] Checking if "addons-854100" exists ...
	I0528 13:26:01.433586   14668 host.go:66] Checking if "addons-854100" exists ...
	I0528 13:26:01.433586   14668 mustload.go:65] Loading cluster: addons-854100
	I0528 13:26:01.433586   14668 addons.go:234] Setting addon helm-tiller=true in "addons-854100"
	I0528 13:26:01.434591   14668 host.go:66] Checking if "addons-854100" exists ...
	I0528 13:26:01.434591   14668 config.go:182] Loaded profile config "addons-854100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0528 13:26:01.433586   14668 addons.go:234] Setting addon yakd=true in "addons-854100"
	I0528 13:26:01.434591   14668 host.go:66] Checking if "addons-854100" exists ...
	I0528 13:26:01.433586   14668 addons.go:234] Setting addon inspektor-gadget=true in "addons-854100"
	I0528 13:26:01.434591   14668 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-854100 ).state
	I0528 13:26:01.435589   14668 host.go:66] Checking if "addons-854100" exists ...
	I0528 13:26:01.433586   14668 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-854100"
	I0528 13:26:01.437589   14668 host.go:66] Checking if "addons-854100" exists ...
	I0528 13:26:01.433586   14668 addons.go:234] Setting addon cloud-spanner=true in "addons-854100"
	I0528 13:26:01.433586   14668 addons.go:234] Setting addon ingress-dns=true in "addons-854100"
	I0528 13:26:01.433586   14668 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-854100"
	I0528 13:26:01.433586   14668 addons.go:234] Setting addon metrics-server=true in "addons-854100"
	I0528 13:26:01.437589   14668 host.go:66] Checking if "addons-854100" exists ...
	I0528 13:26:01.437589   14668 host.go:66] Checking if "addons-854100" exists ...
	I0528 13:26:01.438585   14668 host.go:66] Checking if "addons-854100" exists ...
	I0528 13:26:01.438585   14668 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-854100 ).state
	I0528 13:26:01.439599   14668 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-854100 ).state
	I0528 13:26:01.439599   14668 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-854100 ).state
	I0528 13:26:01.439599   14668 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-854100 ).state
	I0528 13:26:01.440610   14668 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-854100 ).state
	I0528 13:26:01.440610   14668 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-854100 ).state
	I0528 13:26:01.440610   14668 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-854100 ).state
	I0528 13:26:01.442599   14668 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-854100 ).state
	I0528 13:26:01.443640   14668 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-854100 ).state
	I0528 13:26:01.443640   14668 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-854100 ).state
	I0528 13:26:01.444639   14668 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-854100 ).state
	I0528 13:26:01.444639   14668 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-854100 ).state
	I0528 13:26:01.445612   14668 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-854100 ).state
	I0528 13:26:01.445612   14668 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-854100 ).state
	I0528 13:26:01.446612   14668 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-854100 ).state
	I0528 13:26:01.467632   14668 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 13:26:02.362939   14668 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.18.208.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0528 13:26:02.539127   14668 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.071495s)
	I0528 13:26:02.580712   14668 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0528 13:26:04.505646   14668 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.18.208.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.1427065s)
	I0528 13:26:04.505646   14668 start.go:946] {"host.minikube.internal": 172.18.208.1} host record injected into CoreDNS's ConfigMap
	I0528 13:26:04.511646   14668 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.930933s)
	I0528 13:26:04.514655   14668 node_ready.go:35] waiting up to 6m0s for node "addons-854100" to be "Ready" ...
	I0528 13:26:05.033821   14668 node_ready.go:49] node "addons-854100" has status "Ready":"True"
	I0528 13:26:05.034848   14668 node_ready.go:38] duration metric: took 519.1656ms for node "addons-854100" to be "Ready" ...
	I0528 13:26:05.034848   14668 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0528 13:26:05.131013   14668 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-5zj27" in "kube-system" namespace to be "Ready" ...
	I0528 13:26:05.850949   14668 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-854100" context rescaled to 1 replicas
	I0528 13:26:07.320291   14668 pod_ready.go:102] pod "coredns-7db6d8ff4d-5zj27" in "kube-system" namespace has status "Ready":"False"
	I0528 13:26:08.139614   14668 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 13:26:08.139614   14668 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:26:08.166369   14668 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.7.0
	I0528 13:26:08.187532   14668 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.7.0
	I0528 13:26:08.202498   14668 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.7.0
	I0528 13:26:08.201491   14668 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 13:26:08.204804   14668 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:26:08.213286   14668 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0528 13:26:08.217256   14668 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0528 13:26:08.217256   14668 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0528 13:26:08.217256   14668 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-854100 ).state
	I0528 13:26:08.218873   14668 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 13:26:08.218873   14668 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:26:08.224859   14668 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.28.1
	I0528 13:26:08.228856   14668 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0528 13:26:08.228856   14668 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0528 13:26:08.228856   14668 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-854100 ).state
	I0528 13:26:08.257731   14668 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0528 13:26:08.257731   14668 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (626760 bytes)
	I0528 13:26:08.257731   14668 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-854100 ).state
	I0528 13:26:08.396670   14668 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 13:26:08.396670   14668 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:26:08.417670   14668 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0528 13:26:08.396670   14668 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 13:26:08.439673   14668 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:26:08.439673   14668 host.go:66] Checking if "addons-854100" exists ...
	I0528 13:26:08.441656   14668 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0528 13:26:08.441656   14668 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0528 13:26:08.441656   14668 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-854100 ).state
	I0528 13:26:08.661371   14668 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 13:26:08.661371   14668 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:26:08.664341   14668 out.go:177]   - Using image docker.io/registry:2.8.3
	I0528 13:26:08.673272   14668 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0528 13:26:08.677277   14668 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0528 13:26:08.677277   14668 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0528 13:26:08.677277   14668 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-854100 ).state
	I0528 13:26:08.714061   14668 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 13:26:08.715054   14668 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:26:08.723062   14668 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0528 13:26:08.729114   14668 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0528 13:26:08.729114   14668 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0528 13:26:08.729114   14668 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-854100 ).state
	I0528 13:26:08.741080   14668 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 13:26:08.741080   14668 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:26:08.744136   14668 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0528 13:26:08.749697   14668 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0528 13:26:08.749697   14668 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0528 13:26:08.749697   14668 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-854100 ).state
	I0528 13:26:08.748054   14668 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 13:26:08.751321   14668 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:26:08.758987   14668 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.17
	I0528 13:26:08.766746   14668 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0528 13:26:08.766746   14668 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0528 13:26:08.767355   14668 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-854100 ).state
	I0528 13:26:08.865401   14668 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 13:26:08.865401   14668 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:26:08.869133   14668 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0528 13:26:08.871404   14668 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0528 13:26:08.871404   14668 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0528 13:26:08.871404   14668 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-854100 ).state
	I0528 13:26:08.984245   14668 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 13:26:08.984245   14668 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:26:08.998365   14668 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.15.0
	I0528 13:26:08.986247   14668 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 13:26:09.006370   14668 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 13:26:09.013593   14668 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:26:09.016604   14668 addons.go:234] Setting addon default-storageclass=true in "addons-854100"
	I0528 13:26:09.016604   14668 host.go:66] Checking if "addons-854100" exists ...
	I0528 13:26:09.017595   14668 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-854100 ).state
	I0528 13:26:09.019298   14668 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:26:09.019298   14668 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0528 13:26:09.023682   14668 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0528 13:26:09.023682   14668 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-854100 ).state
	I0528 13:26:09.028668   14668 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0528 13:26:09.064697   14668 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0528 13:26:09.086669   14668 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0528 13:26:09.137626   14668 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0528 13:26:09.239283   14668 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0528 13:26:09.276142   14668 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0528 13:26:09.279144   14668 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0528 13:26:09.295147   14668 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0528 13:26:09.298145   14668 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0528 13:26:09.298145   14668 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0528 13:26:09.298145   14668 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-854100 ).state
	I0528 13:26:09.449003   14668 pod_ready.go:102] pod "coredns-7db6d8ff4d-5zj27" in "kube-system" namespace has status "Ready":"False"
	I0528 13:26:09.602789   14668 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 13:26:09.602789   14668 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:26:09.606784   14668 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.1
	I0528 13:26:09.613787   14668 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0528 13:26:09.616790   14668 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 13:26:09.616790   14668 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:26:09.628790   14668 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0528 13:26:09.625791   14668 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 13:26:09.636758   14668 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0528 13:26:09.638793   14668 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0528 13:26:09.638793   14668 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-854100 ).state
	I0528 13:26:09.640511   14668 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0528 13:26:09.642571   14668 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0528 13:26:09.642571   14668 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0528 13:26:09.642571   14668 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-854100 ).state
	I0528 13:26:09.645568   14668 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:26:09.648575   14668 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-854100"
	I0528 13:26:09.648575   14668 host.go:66] Checking if "addons-854100" exists ...
	I0528 13:26:09.649571   14668 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-854100 ).state
	I0528 13:26:11.595162   14668 pod_ready.go:102] pod "coredns-7db6d8ff4d-5zj27" in "kube-system" namespace has status "Ready":"False"
	I0528 13:26:13.983256   14668 pod_ready.go:102] pod "coredns-7db6d8ff4d-5zj27" in "kube-system" namespace has status "Ready":"False"
	I0528 13:26:14.621459   14668 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 13:26:14.621459   14668 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:26:14.621459   14668 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-854100 ).networkadapters[0]).ipaddresses[0]
	I0528 13:26:14.755441   14668 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 13:26:14.755441   14668 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:26:14.755441   14668 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-854100 ).networkadapters[0]).ipaddresses[0]
	I0528 13:26:15.168967   14668 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 13:26:15.168967   14668 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:26:15.168967   14668 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-854100 ).networkadapters[0]).ipaddresses[0]
	I0528 13:26:15.275144   14668 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 13:26:15.275144   14668 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:26:15.275144   14668 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-854100 ).networkadapters[0]).ipaddresses[0]
	I0528 13:26:15.378282   14668 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 13:26:15.378282   14668 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:26:15.378282   14668 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-854100 ).networkadapters[0]).ipaddresses[0]
	I0528 13:26:15.412722   14668 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 13:26:15.412722   14668 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:26:15.412722   14668 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-854100 ).networkadapters[0]).ipaddresses[0]
	I0528 13:26:15.512855   14668 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 13:26:15.512855   14668 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:26:15.512855   14668 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-854100 ).networkadapters[0]).ipaddresses[0]
	I0528 13:26:15.571754   14668 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 13:26:15.571754   14668 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:26:15.571754   14668 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0528 13:26:15.571754   14668 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0528 13:26:15.571754   14668 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-854100 ).state
	I0528 13:26:15.948590   14668 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 13:26:15.948590   14668 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:26:15.948590   14668 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-854100 ).networkadapters[0]).ipaddresses[0]
	I0528 13:26:15.965931   14668 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 13:26:15.965931   14668 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:26:15.965931   14668 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-854100 ).networkadapters[0]).ipaddresses[0]
	I0528 13:26:15.995962   14668 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 13:26:15.995962   14668 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:26:15.995962   14668 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-854100 ).networkadapters[0]).ipaddresses[0]
	I0528 13:26:16.059630   14668 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 13:26:16.059630   14668 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:26:16.059630   14668 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-854100 ).networkadapters[0]).ipaddresses[0]
	I0528 13:26:16.065849   14668 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 13:26:16.065849   14668 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:26:16.092639   14668 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0528 13:26:16.134204   14668 out.go:177]   - Using image docker.io/busybox:stable
	I0528 13:26:16.166700   14668 pod_ready.go:102] pod "coredns-7db6d8ff4d-5zj27" in "kube-system" namespace has status "Ready":"False"
	I0528 13:26:16.166700   14668 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0528 13:26:16.166700   14668 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0528 13:26:16.166700   14668 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-854100 ).state
	I0528 13:26:16.402184   14668 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 13:26:16.402184   14668 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:26:16.402184   14668 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-854100 ).networkadapters[0]).ipaddresses[0]
	I0528 13:26:16.540934   14668 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0528 13:26:16.540934   14668 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-854100 ).state
	I0528 13:26:16.663003   14668 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 13:26:16.663003   14668 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:26:16.663003   14668 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-854100 ).networkadapters[0]).ipaddresses[0]
	I0528 13:26:18.111633   14668 pod_ready.go:92] pod "coredns-7db6d8ff4d-5zj27" in "kube-system" namespace has status "Ready":"True"
	I0528 13:26:18.111633   14668 pod_ready.go:81] duration metric: took 12.8582243s for pod "coredns-7db6d8ff4d-5zj27" in "kube-system" namespace to be "Ready" ...
	I0528 13:26:18.111633   14668 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-d92fw" in "kube-system" namespace to be "Ready" ...
	I0528 13:26:18.368547   14668 pod_ready.go:92] pod "coredns-7db6d8ff4d-d92fw" in "kube-system" namespace has status "Ready":"True"
	I0528 13:26:18.368547   14668 pod_ready.go:81] duration metric: took 256.9135ms for pod "coredns-7db6d8ff4d-d92fw" in "kube-system" namespace to be "Ready" ...
	I0528 13:26:18.368547   14668 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-854100" in "kube-system" namespace to be "Ready" ...
	I0528 13:26:18.410552   14668 pod_ready.go:92] pod "etcd-addons-854100" in "kube-system" namespace has status "Ready":"True"
	I0528 13:26:18.410552   14668 pod_ready.go:81] duration metric: took 42.0046ms for pod "etcd-addons-854100" in "kube-system" namespace to be "Ready" ...
	I0528 13:26:18.410552   14668 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-854100" in "kube-system" namespace to be "Ready" ...
	I0528 13:26:18.728552   14668 pod_ready.go:92] pod "kube-apiserver-addons-854100" in "kube-system" namespace has status "Ready":"True"
	I0528 13:26:18.728552   14668 pod_ready.go:81] duration metric: took 317.9997ms for pod "kube-apiserver-addons-854100" in "kube-system" namespace to be "Ready" ...
	I0528 13:26:18.728552   14668 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-854100" in "kube-system" namespace to be "Ready" ...
	I0528 13:26:18.755542   14668 pod_ready.go:92] pod "kube-controller-manager-addons-854100" in "kube-system" namespace has status "Ready":"True"
	I0528 13:26:18.755542   14668 pod_ready.go:81] duration metric: took 26.9907ms for pod "kube-controller-manager-addons-854100" in "kube-system" namespace to be "Ready" ...
	I0528 13:26:18.755542   14668 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jkg6r" in "kube-system" namespace to be "Ready" ...
	I0528 13:26:18.784552   14668 pod_ready.go:92] pod "kube-proxy-jkg6r" in "kube-system" namespace has status "Ready":"True"
	I0528 13:26:18.784552   14668 pod_ready.go:81] duration metric: took 29.0098ms for pod "kube-proxy-jkg6r" in "kube-system" namespace to be "Ready" ...
	I0528 13:26:18.784552   14668 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-854100" in "kube-system" namespace to be "Ready" ...
	I0528 13:26:18.801658   14668 pod_ready.go:92] pod "kube-scheduler-addons-854100" in "kube-system" namespace has status "Ready":"True"
	I0528 13:26:18.801658   14668 pod_ready.go:81] duration metric: took 16.1191ms for pod "kube-scheduler-addons-854100" in "kube-system" namespace to be "Ready" ...
	I0528 13:26:18.801658   14668 pod_ready.go:38] duration metric: took 13.7668032s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0528 13:26:18.801658   14668 api_server.go:52] waiting for apiserver process to appear ...
	I0528 13:26:18.822993   14668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 13:26:18.964058   14668 api_server.go:72] duration metric: took 17.5364639s to wait for apiserver process to appear ...
	I0528 13:26:18.964058   14668 api_server.go:88] waiting for apiserver healthz status ...
	I0528 13:26:18.965051   14668 api_server.go:253] Checking apiserver healthz at https://172.18.209.103:8443/healthz ...
	I0528 13:26:19.007000   14668 api_server.go:279] https://172.18.209.103:8443/healthz returned 200:
	ok
	I0528 13:26:19.020155   14668 api_server.go:141] control plane version: v1.30.1
	I0528 13:26:19.020155   14668 api_server.go:131] duration metric: took 56.0976ms to wait for apiserver health ...
	I0528 13:26:19.020155   14668 system_pods.go:43] waiting for kube-system pods to appear ...
	I0528 13:26:19.041344   14668 system_pods.go:59] 7 kube-system pods found
	I0528 13:26:19.042293   14668 system_pods.go:61] "coredns-7db6d8ff4d-5zj27" [2d780206-a889-4ae7-b1f8-c37d2f7c36bd] Running
	I0528 13:26:19.042293   14668 system_pods.go:61] "coredns-7db6d8ff4d-d92fw" [47e15a9c-a859-4ce2-97d5-5dcfddd5f7b7] Running
	I0528 13:26:19.042293   14668 system_pods.go:61] "etcd-addons-854100" [daf00951-e8f2-4ba2-814c-070bdf6fe439] Running
	I0528 13:26:19.042293   14668 system_pods.go:61] "kube-apiserver-addons-854100" [e4b3f521-555a-434d-a90e-4eba46c0a56b] Running
	I0528 13:26:19.042293   14668 system_pods.go:61] "kube-controller-manager-addons-854100" [7df0a975-32af-41d2-8ad4-592317c6a3d2] Running
	I0528 13:26:19.042293   14668 system_pods.go:61] "kube-proxy-jkg6r" [f989fc97-6445-4bd6-8332-11403d397310] Running
	I0528 13:26:19.042293   14668 system_pods.go:61] "kube-scheduler-addons-854100" [0fdaaeeb-acd0-4153-802b-6d989ae79354] Running
	I0528 13:26:19.042293   14668 system_pods.go:74] duration metric: took 22.138ms to wait for pod list to return data ...
	I0528 13:26:19.042293   14668 default_sa.go:34] waiting for default service account to be created ...
	I0528 13:26:19.095956   14668 default_sa.go:45] found service account: "default"
	I0528 13:26:19.095956   14668 default_sa.go:55] duration metric: took 53.6627ms for default service account to be created ...
	I0528 13:26:19.095956   14668 system_pods.go:116] waiting for k8s-apps to be running ...
	I0528 13:26:19.221567   14668 system_pods.go:86] 7 kube-system pods found
	I0528 13:26:19.221567   14668 system_pods.go:89] "coredns-7db6d8ff4d-5zj27" [2d780206-a889-4ae7-b1f8-c37d2f7c36bd] Running
	I0528 13:26:19.221567   14668 system_pods.go:89] "coredns-7db6d8ff4d-d92fw" [47e15a9c-a859-4ce2-97d5-5dcfddd5f7b7] Running
	I0528 13:26:19.221567   14668 system_pods.go:89] "etcd-addons-854100" [daf00951-e8f2-4ba2-814c-070bdf6fe439] Running
	I0528 13:26:19.221567   14668 system_pods.go:89] "kube-apiserver-addons-854100" [e4b3f521-555a-434d-a90e-4eba46c0a56b] Running
	I0528 13:26:19.221567   14668 system_pods.go:89] "kube-controller-manager-addons-854100" [7df0a975-32af-41d2-8ad4-592317c6a3d2] Running
	I0528 13:26:19.221567   14668 system_pods.go:89] "kube-proxy-jkg6r" [f989fc97-6445-4bd6-8332-11403d397310] Running
	I0528 13:26:19.221567   14668 system_pods.go:89] "kube-scheduler-addons-854100" [0fdaaeeb-acd0-4153-802b-6d989ae79354] Running
	I0528 13:26:19.221567   14668 system_pods.go:126] duration metric: took 125.6108ms to wait for k8s-apps to be running ...
	I0528 13:26:19.221567   14668 system_svc.go:44] waiting for kubelet service to be running ....
	I0528 13:26:19.243153   14668 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0528 13:26:19.357532   14668 system_svc.go:56] duration metric: took 135.9646ms WaitForService to wait for kubelet
	I0528 13:26:19.357532   14668 kubeadm.go:576] duration metric: took 17.9299376s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0528 13:26:19.357532   14668 node_conditions.go:102] verifying NodePressure condition ...
	I0528 13:26:19.400760   14668 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0528 13:26:19.400760   14668 node_conditions.go:123] node cpu capacity is 2
	I0528 13:26:19.400760   14668 node_conditions.go:105] duration metric: took 43.2289ms to run NodePressure ...
	I0528 13:26:19.401776   14668 start.go:240] waiting for startup goroutines ...
	I0528 13:26:21.755169   14668 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 13:26:21.755169   14668 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:26:21.755169   14668 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-854100 ).networkadapters[0]).ipaddresses[0]
	I0528 13:26:22.673579   14668 main.go:141] libmachine: [stdout =====>] : 172.18.209.103
	
	I0528 13:26:22.673579   14668 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:26:22.673579   14668 sshutil.go:53] new ssh client: &{IP:172.18.209.103 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-854100\id_rsa Username:docker}
	I0528 13:26:22.784096   14668 main.go:141] libmachine: [stdout =====>] : 172.18.209.103
	
	I0528 13:26:22.784096   14668 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:26:22.784096   14668 sshutil.go:53] new ssh client: &{IP:172.18.209.103 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-854100\id_rsa Username:docker}
	I0528 13:26:22.872662   14668 main.go:141] libmachine: [stdout =====>] : 172.18.209.103
	
	I0528 13:26:22.872662   14668 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:26:22.873009   14668 sshutil.go:53] new ssh client: &{IP:172.18.209.103 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-854100\id_rsa Username:docker}
	I0528 13:26:23.034606   14668 main.go:141] libmachine: [stdout =====>] : 172.18.209.103
	
	I0528 13:26:23.034685   14668 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:26:23.034957   14668 sshutil.go:53] new ssh client: &{IP:172.18.209.103 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-854100\id_rsa Username:docker}
	I0528 13:26:23.117618   14668 main.go:141] libmachine: [stdout =====>] : 172.18.209.103
	
	I0528 13:26:23.117618   14668 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:26:23.118230   14668 sshutil.go:53] new ssh client: &{IP:172.18.209.103 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-854100\id_rsa Username:docker}
	I0528 13:26:23.119722   14668 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0528 13:26:23.119817   14668 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0528 13:26:23.170963   14668 main.go:141] libmachine: [stdout =====>] : 172.18.209.103
	
	I0528 13:26:23.171147   14668 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:26:23.171147   14668 sshutil.go:53] new ssh client: &{IP:172.18.209.103 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-854100\id_rsa Username:docker}
	I0528 13:26:23.214358   14668 main.go:141] libmachine: [stdout =====>] : 172.18.209.103
	
	I0528 13:26:23.214569   14668 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:26:23.214776   14668 sshutil.go:53] new ssh client: &{IP:172.18.209.103 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-854100\id_rsa Username:docker}
	I0528 13:26:23.292587   14668 main.go:141] libmachine: [stdout =====>] : 172.18.209.103
	
	I0528 13:26:23.292587   14668 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:26:23.292587   14668 sshutil.go:53] new ssh client: &{IP:172.18.209.103 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-854100\id_rsa Username:docker}
	I0528 13:26:23.316430   14668 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0528 13:26:23.316488   14668 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0528 13:26:23.375868   14668 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0528 13:26:23.375868   14668 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0528 13:26:23.383558   14668 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0528 13:26:23.383558   14668 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0528 13:26:23.400935   14668 main.go:141] libmachine: [stdout =====>] : 172.18.209.103
	
	I0528 13:26:23.400935   14668 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:26:23.401202   14668 sshutil.go:53] new ssh client: &{IP:172.18.209.103 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-854100\id_rsa Username:docker}
	I0528 13:26:23.449926   14668 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0528 13:26:23.449926   14668 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0528 13:26:23.463940   14668 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 13:26:23.463940   14668 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:26:23.463940   14668 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-854100 ).networkadapters[0]).ipaddresses[0]
	I0528 13:26:23.519853   14668 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0528 13:26:23.519909   14668 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0528 13:26:23.559269   14668 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0528 13:26:23.559269   14668 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0528 13:26:23.572262   14668 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0528 13:26:23.613264   14668 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0528 13:26:23.679136   14668 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0528 13:26:23.679190   14668 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0528 13:26:23.708462   14668 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0528 13:26:23.708462   14668 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0528 13:26:23.725891   14668 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0528 13:26:23.725958   14668 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0528 13:26:23.788455   14668 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0528 13:26:23.834836   14668 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0528 13:26:23.840838   14668 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0528 13:26:23.939555   14668 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0528 13:26:23.939677   14668 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0528 13:26:23.946564   14668 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0528 13:26:23.946669   14668 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0528 13:26:23.947038   14668 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0528 13:26:23.947038   14668 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0528 13:26:24.078567   14668 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0528 13:26:24.084573   14668 main.go:141] libmachine: [stdout =====>] : 172.18.209.103
	
	I0528 13:26:24.084573   14668 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:26:24.084573   14668 sshutil.go:53] new ssh client: &{IP:172.18.209.103 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-854100\id_rsa Username:docker}
	I0528 13:26:24.146101   14668 main.go:141] libmachine: [stdout =====>] : 172.18.209.103
	
	I0528 13:26:24.146101   14668 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:26:24.146313   14668 sshutil.go:53] new ssh client: &{IP:172.18.209.103 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-854100\id_rsa Username:docker}
	I0528 13:26:24.167994   14668 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0528 13:26:24.167994   14668 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0528 13:26:24.235840   14668 main.go:141] libmachine: [stdout =====>] : 172.18.209.103
	
	I0528 13:26:24.235840   14668 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:26:24.235840   14668 sshutil.go:53] new ssh client: &{IP:172.18.209.103 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-854100\id_rsa Username:docker}
	I0528 13:26:24.252771   14668 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0528 13:26:24.252935   14668 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0528 13:26:24.301359   14668 main.go:141] libmachine: [stdout =====>] : 172.18.209.103
	
	I0528 13:26:24.301728   14668 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:26:24.302008   14668 sshutil.go:53] new ssh client: &{IP:172.18.209.103 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-854100\id_rsa Username:docker}
	I0528 13:26:24.448299   14668 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0528 13:26:24.448434   14668 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0528 13:26:24.464271   14668 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 13:26:24.464604   14668 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:26:24.464668   14668 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-854100 ).networkadapters[0]).ipaddresses[0]
	I0528 13:26:24.536428   14668 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0528 13:26:24.536428   14668 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0528 13:26:24.562461   14668 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0528 13:26:24.562461   14668 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0528 13:26:24.764320   14668 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0528 13:26:24.764451   14668 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0528 13:26:24.764451   14668 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0528 13:26:24.764451   14668 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0528 13:26:24.778370   14668 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0528 13:26:24.778473   14668 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0528 13:26:24.816033   14668 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0528 13:26:24.816148   14668 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0528 13:26:24.869044   14668 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0528 13:26:24.880041   14668 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0528 13:26:25.001623   14668 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0528 13:26:25.001623   14668 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0528 13:26:25.063601   14668 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0528 13:26:25.063668   14668 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0528 13:26:25.070865   14668 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0528 13:26:25.114323   14668 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0528 13:26:25.114323   14668 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0528 13:26:25.115324   14668 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0528 13:26:25.382219   14668 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0528 13:26:25.382219   14668 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0528 13:26:25.403179   14668 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0528 13:26:25.403179   14668 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0528 13:26:25.461998   14668 main.go:141] libmachine: [stdout =====>] : 172.18.209.103
	
	I0528 13:26:25.461998   14668 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:26:25.462289   14668 sshutil.go:53] new ssh client: &{IP:172.18.209.103 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-854100\id_rsa Username:docker}
	I0528 13:26:25.558148   14668 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0528 13:26:25.558148   14668 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0528 13:26:25.776546   14668 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0528 13:26:25.776640   14668 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0528 13:26:25.869033   14668 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0528 13:26:25.869149   14668 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0528 13:26:25.910867   14668 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0528 13:26:26.138871   14668 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0528 13:26:26.138871   14668 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0528 13:26:26.226340   14668 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0528 13:26:26.477525   14668 main.go:141] libmachine: [stdout =====>] : 172.18.209.103
	
	I0528 13:26:26.477747   14668 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:26:26.477873   14668 sshutil.go:53] new ssh client: &{IP:172.18.209.103 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-854100\id_rsa Username:docker}
	I0528 13:26:26.565352   14668 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0528 13:26:26.565352   14668 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0528 13:26:26.678312   14668 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0528 13:26:27.123116   14668 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (3.5508524s)
	I0528 13:26:27.233686   14668 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0528 13:26:27.233782   14668 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0528 13:26:27.265470   14668 main.go:141] libmachine: [stdout =====>] : 172.18.209.103
	
	I0528 13:26:27.265688   14668 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:26:27.265880   14668 sshutil.go:53] new ssh client: &{IP:172.18.209.103 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-854100\id_rsa Username:docker}
	I0528 13:26:27.862877   14668 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.2495585s)
	I0528 13:26:27.862940   14668 addons.go:475] Verifying addon registry=true in "addons-854100"
	I0528 13:26:27.865552   14668 out.go:177] * Verifying registry addon...
	I0528 13:26:27.872542   14668 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0528 13:26:27.889667   14668 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0528 13:26:27.889667   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
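[annotation] The kapi.go lines above and below are a label-selector poll: minikube lists pods matching "kubernetes.io/minikube-addons=registry" in kube-system and logs the phase until every pod is Running. The exact kapi.go source is not part of this log; the following is only a minimal client-go sketch of the same idea, assuming a clientset built from the usual kubeconfig, and the helper name waitForLabeledPods is hypothetical.

    // Minimal sketch (not the kapi.go source) of polling pods by label until Running.
    package kapisketch

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitForLabeledPods polls until all pods matching selector in ns report Running.
    func waitForLabeledPods(cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
        return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
            pods, err := cs.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
            if err != nil {
                return false, nil // transient API error: keep polling
            }
            if len(pods.Items) == 0 {
                return false, nil // nothing matching the selector yet
            }
            for _, p := range pods.Items {
                if p.Status.Phase != corev1.PodRunning {
                    fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
                    return false, nil
                }
            }
            return true, nil
        })
    }
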
	I0528 13:26:27.972815   14668 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0528 13:26:28.016427   14668 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0528 13:26:28.107780   14668 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.3193232s)
	I0528 13:26:28.397173   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:26:28.902529   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:26:29.009205   14668 addons.go:234] Setting addon gcp-auth=true in "addons-854100"
	I0528 13:26:29.009205   14668 host.go:66] Checking if "addons-854100" exists ...
	I0528 13:26:29.009791   14668 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-854100 ).state
	I0528 13:26:29.202859   14668 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0528 13:26:29.422001   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:26:29.967425   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:26:30.389563   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:26:31.179345   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:26:31.236355   14668 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.4015152s)
	I0528 13:26:31.236355   14668 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.3955132s)
	I0528 13:26:31.478368   14668 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 13:26:31.479038   14668 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:26:31.494052   14668 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0528 13:26:31.494052   14668 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-854100 ).state
	I0528 13:26:31.497736   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:26:31.914260   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:26:32.411798   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:26:32.888972   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:26:33.389559   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:26:33.872802   14668 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 13:26:33.872802   14668 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:26:33.872802   14668 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-854100 ).networkadapters[0]).ipaddresses[0]
	I0528 13:26:33.905035   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:26:34.381508   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:26:34.889890   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:26:35.537730   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:26:35.922013   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:26:36.378771   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:26:36.637462   14668 main.go:141] libmachine: [stdout =====>] : 172.18.209.103
	
	I0528 13:26:36.637558   14668 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:26:36.637608   14668 sshutil.go:53] new ssh client: &{IP:172.18.209.103 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\addons-854100\id_rsa Username:docker}
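[annotation] The libmachine "[executing ==>] / [stdout =====>]" pairs above show the Hyper-V driver shelling out to PowerShell to read the VM's first IP address, which sshutil.go then dials with the machine's id_rsa key. This is not the driver's source, just a minimal Go sketch of that shell-out; hypervVMIP is a hypothetical helper and it assumes powershell.exe plus the Hyper-V module are available, as on the Windows host running these tests.

    // Minimal sketch of the PowerShell shell-out seen in the log (not the libmachine source).
    package hypervsketch

    import (
        "os/exec"
        "strings"
    )

    // hypervVMIP returns the first IP address reported by Hyper-V for vmName.
    func hypervVMIP(vmName string) (string, error) {
        cmd := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive",
            "((Hyper-V\\Get-VM "+vmName+").NetworkAdapters[0]).IPAddresses[0]")
        out, err := cmd.Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil // e.g. "172.18.209.103"
    }
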
	I0528 13:26:36.890173   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:26:37.399460   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:26:37.934638   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:26:38.405483   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:26:38.919560   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:26:39.397749   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:26:39.932443   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:26:40.561521   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:26:41.019485   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:26:41.413087   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:26:41.493523   14668 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (16.624408s)
	I0528 13:26:41.493523   14668 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (17.4148847s)
	I0528 13:26:41.493658   14668 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (16.6136077s)
	I0528 13:26:41.493756   14668 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (16.4228811s)
	I0528 13:26:41.493658   14668 addons.go:475] Verifying addon ingress=true in "addons-854100"
	I0528 13:26:41.495978   14668 out.go:177] * Verifying ingress addon...
	I0528 13:26:41.494075   14668 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (16.3787177s)
	I0528 13:26:41.494075   14668 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (15.5831992s)
	I0528 13:26:41.494075   14668 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (15.2677264s)
	I0528 13:26:41.494075   14668 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (14.8157541s)
	W0528 13:26:41.500030   14668 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0528 13:26:41.500030   14668 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-854100 service yakd-dashboard -n yakd-dashboard
	
	I0528 13:26:41.500030   14668 retry.go:31] will retry after 134.542556ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0528 13:26:41.500030   14668 addons.go:475] Verifying addon metrics-server=true in "addons-854100"
	I0528 13:26:41.502165   14668 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0528 13:26:41.534933   14668 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0528 13:26:41.534933   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:26:41.659954   14668 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
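[annotation] The failure logged above is a CRD ordering race: the single kubectl apply bundles the VolumeSnapshotClass object with the CRDs that define it, so the API server rejects the custom resource ("ensure CRDs are installed first") before discovery has picked up the just-created CRDs. addons.go marks the apply as retryable, and the re-run above adds --force. The sketch below is not minikube's retry.go, only a minimal illustration of retrying an apply when that specific error appears; applyWithCRDRetry is a hypothetical helper.

    // Minimal sketch of retrying "kubectl apply" when custom resources race their CRDs.
    package applysketch

    import (
        "os/exec"
        "strings"
        "time"
    )

    // applyWithCRDRetry re-runs kubectl apply while the CRD-not-yet-established error persists.
    func applyWithCRDRetry(kubectl string, files []string, attempts int) error {
        args := []string{"apply"}
        for _, f := range files {
            args = append(args, "-f", f)
        }
        var err error
        for i := 0; i < attempts; i++ {
            out, e := exec.Command(kubectl, args...).CombinedOutput()
            err = e
            if e == nil {
                return nil
            }
            if !strings.Contains(string(out), "ensure CRDs are installed first") {
                return e // unrelated failure: give up immediately
            }
            time.Sleep(200 * time.Millisecond) // give the new CRDs time to become established
        }
        return err
    }
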
	I0528 13:26:41.903358   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:26:42.020196   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:26:42.358033   14668 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (14.3415971s)
	I0528 13:26:42.358033   14668 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-854100"
	I0528 13:26:42.358033   14668 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (13.1551657s)
	I0528 13:26:42.361514   14668 out.go:177] * Verifying csi-hostpath-driver addon...
	I0528 13:26:42.358033   14668 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (10.8639747s)
	I0528 13:26:42.366852   14668 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0528 13:26:42.366725   14668 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0528 13:26:42.371696   14668 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0528 13:26:42.374010   14668 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0528 13:26:42.374047   14668 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0528 13:26:42.395564   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:26:42.417348   14668 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0528 13:26:42.417348   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:26:42.540495   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:26:42.644423   14668 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0528 13:26:42.644580   14668 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0528 13:26:42.743771   14668 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0528 13:26:42.743771   14668 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0528 13:26:42.852856   14668 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0528 13:26:42.881944   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:26:42.886506   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:26:43.018199   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:26:43.431181   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:26:43.448721   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:26:43.561984   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:26:43.890950   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:26:43.895330   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:26:44.015257   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:26:44.376265   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:26:44.386052   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:26:44.521702   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:26:44.900645   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:26:44.906162   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:26:44.928369   14668 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.2684128s)
	I0528 13:26:45.009767   14668 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.1569091s)
	I0528 13:26:45.018794   14668 addons.go:475] Verifying addon gcp-auth=true in "addons-854100"
	I0528 13:26:45.021797   14668 out.go:177] * Verifying gcp-auth addon...
	I0528 13:26:45.025792   14668 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0528 13:26:45.039796   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:26:45.057174   14668 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0528 13:26:45.381196   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:26:45.385777   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:26:45.516770   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:26:45.883761   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:26:45.886780   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:26:46.011958   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:26:46.386615   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:26:46.389767   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:26:46.521713   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:26:46.890759   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:26:46.892512   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:26:47.013358   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:26:47.381084   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:26:47.382086   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:26:47.528693   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:26:47.889053   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:26:47.889778   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:26:48.015240   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:26:48.382466   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:26:48.387805   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:26:48.523886   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:26:48.887974   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:26:48.887974   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:26:49.013671   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:26:49.380967   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:26:49.380967   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:26:49.520937   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:26:49.888361   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:26:49.890846   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:26:50.012285   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:26:50.379653   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:26:50.385364   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:26:50.517405   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:26:50.886317   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:26:50.888137   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:26:51.023305   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:26:51.395063   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:26:51.396330   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:26:51.516817   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:26:51.890350   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:26:51.894332   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:26:52.421324   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:26:52.421324   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:26:52.425277   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:26:52.516971   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:26:52.885345   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:26:52.885345   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:26:53.023274   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:26:53.393260   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:26:53.395754   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:26:53.517056   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:26:53.881480   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:26:53.882975   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:26:54.021035   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:26:54.395651   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:26:54.397649   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:26:54.517490   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:26:56.562450   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:26:56.563454   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:26:56.564457   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:26:56.618071   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:26:56.618494   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:26:56.619246   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:26:56.887058   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:26:56.888039   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:26:57.030242   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:26:57.377545   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:26:57.383125   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:26:57.527155   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:26:57.886065   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:26:57.887052   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:26:58.013759   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:26:58.379522   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:26:58.383524   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:26:58.521559   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:26:58.892719   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:26:58.906454   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:26:59.014714   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:26:59.379433   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:26:59.383202   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:26:59.522874   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:26:59.891415   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:26:59.893327   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:27:00.018331   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:27:00.384701   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:27:00.385876   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:27:00.522699   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:27:00.918806   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:27:00.919875   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:27:01.173293   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:27:01.406964   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:27:01.408115   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:27:01.517067   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:27:01.897293   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:27:01.920557   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:27:02.056614   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:27:02.384041   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:27:02.387051   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:27:02.525792   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:27:02.888255   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:27:02.890319   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:27:03.025474   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:27:03.393508   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:27:03.398066   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:27:03.516925   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:27:03.883360   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:27:03.887887   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:27:04.023613   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:27:04.388841   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:27:04.389024   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:27:04.513649   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:27:04.880725   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:27:04.883553   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:27:05.022258   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:27:05.390506   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:27:05.390796   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:27:05.513809   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:27:05.879220   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:27:05.882808   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:27:06.018877   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:27:06.387356   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:27:06.394346   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:27:06.525965   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:27:06.894502   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:27:06.896019   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:27:07.014243   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:27:07.388312   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:27:07.388312   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:27:07.527713   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:27:07.890504   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:27:07.891836   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:27:08.015913   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:27:08.384861   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:27:08.387521   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:27:08.526988   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:27:08.889210   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:27:08.894125   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:27:09.015882   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:27:09.384036   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:27:09.388591   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:27:09.521621   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:27:09.888498   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:27:09.890122   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:27:10.012089   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:27:10.382543   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:27:10.382973   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:27:10.520304   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:27:10.884927   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:27:10.887521   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:27:11.023597   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:27:11.387530   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:27:11.391119   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:27:11.527918   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:27:11.893702   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:27:11.894660   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:27:12.017442   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:27:12.386191   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:27:12.386785   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:27:12.524024   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:27:12.892284   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:27:12.894073   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:27:13.015464   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:27:13.379359   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:27:13.383487   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:27:13.518598   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:27:13.889072   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:27:13.890380   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:27:14.026839   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:27:14.390749   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:27:14.391385   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:27:14.515187   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:27:14.880878   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:27:14.881070   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:27:15.027867   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:27:15.458913   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:27:15.459842   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:27:15.512108   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:27:15.878917   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:27:15.883574   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:27:16.020809   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:27:16.386344   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:27:16.391981   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:27:16.512706   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:27:16.891281   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:27:16.892397   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:27:17.015970   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:27:17.382735   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:27:17.383340   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:27:17.523303   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:27:17.890678   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:27:17.892804   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:27:18.016757   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:27:18.382828   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:27:18.384512   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:27:18.523165   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:27:18.890837   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:27:18.896745   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:27:19.027190   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:27:19.393800   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:27:19.395079   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:27:19.517383   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:27:19.881814   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:27:19.887516   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:27:20.019810   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:27:20.382522   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:27:20.384325   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:27:21.662964   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:27:21.674395   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:27:21.677055   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:27:21.679054   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:27:21.699514   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:27:21.703659   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:27:21.890091   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:27:21.896252   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:27:22.026890   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:27:22.394185   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:27:22.401914   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:27:22.554970   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:27:22.889851   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:27:22.896116   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:27:23.024610   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:27:23.379660   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:27:23.385396   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:27:23.519917   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:27:23.886616   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:27:23.889199   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:27:24.026301   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:27:24.556006   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:27:24.556006   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:27:24.558514   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:27:24.882991   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:27:24.886908   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:27:25.020068   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:27:25.391399   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:27:25.392315   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:27:25.522975   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:27:25.896923   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:27:25.897025   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:27:26.018579   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:27:26.379496   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:27:26.383189   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:27:26.518642   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:27:26.887324   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:27:26.887442   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:27:27.023602   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:27:27.391312   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:27:27.392347   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:27:27.517581   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:27:27.885620   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:27:27.887777   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:27:28.025096   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:27:28.393511   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:27:28.394494   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:27:28.520978   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:27:28.885078   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:27:28.890629   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:27:29.012257   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:27:29.379350   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:27:29.388901   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:27:29.516895   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:27:29.886637   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:27:29.886735   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:27:30.024402   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:27:30.391933   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:27:30.392550   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:27:30.516678   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:27:30.880469   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:27:30.881089   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:27:31.021548   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:27:31.388437   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:27:31.388437   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:27:31.526610   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:27:31.892680   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:27:31.896134   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:27:32.013453   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:27:32.385217   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:27:32.385217   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:27:32.527575   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:27:32.882691   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:27:32.887593   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:27:33.025591   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:27:33.378281   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:27:33.383222   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:27:33.519362   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:27:33.889248   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:27:33.889248   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:27:34.024925   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:27:34.389399   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:27:34.389969   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:27:34.516138   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:27:34.883561   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:27:34.884516   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:27:35.024139   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:27:35.386311   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:27:35.389396   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:27:35.525498   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:27:36.253620   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:27:36.254464   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:27:36.255820   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:27:37.017942   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:27:37.018983   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:27:37.019790   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:27:37.419599   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:27:37.419874   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:27:37.423256   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:27:37.437120   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:27:37.441757   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:27:37.513323   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:27:37.880538   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:27:37.884967   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 13:27:38.018014   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:27:38.392017   14668 kapi.go:107] duration metric: took 1m10.5194326s to wait for kubernetes.io/minikube-addons=registry ...
	I0528 13:27:38.395006   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:27:38.522022   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:27:38.888763   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:27:39.012111   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:27:39.382772   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:27:39.530182   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:27:39.891544   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:27:40.018316   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:27:40.380825   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:27:40.522572   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:27:40.887041   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:27:41.012589   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:27:41.378551   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:27:41.520848   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:27:41.885732   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:27:42.010240   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:27:42.378014   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:27:42.517149   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:27:42.886375   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:27:43.024816   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:27:43.382532   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:27:43.520345   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:27:43.889377   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:27:44.014223   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:27:44.381377   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:27:44.522347   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:27:45.561768   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:27:45.563413   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:27:45.569719   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:27:45.570138   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:27:45.887052   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:27:46.024967   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:27:46.546473   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:27:46.550665   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:27:46.890326   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:27:47.021777   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:27:47.391479   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:27:47.533057   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:27:47.900133   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:27:48.017567   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:27:48.412962   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:27:48.513241   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:27:48.893178   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:27:49.011494   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:27:49.385560   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:27:49.525098   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:27:49.899929   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:27:50.018597   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:27:50.386288   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:27:50.513906   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:27:50.881873   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:27:51.021103   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:27:51.391046   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:27:51.516132   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:27:51.884329   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:27:52.025942   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:27:52.377154   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:27:52.518148   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:27:53.169954   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:27:53.170764   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:27:53.390305   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:27:53.512897   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:27:53.892119   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:27:54.013554   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:27:54.390557   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:27:54.517921   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:27:54.899037   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:27:55.022255   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:27:55.383395   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:27:55.520988   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:27:55.885327   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:27:56.025962   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:27:56.381359   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:27:56.517927   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:27:56.886218   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:27:57.015393   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:27:57.385303   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:27:57.521436   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:27:57.892953   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:27:58.013350   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:27:58.379645   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:27:58.522952   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:27:58.890022   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:27:59.012355   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:27:59.379970   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:27:59.521782   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:28:00.209570   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:28:00.209780   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:28:00.397304   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:28:00.515579   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:28:00.883657   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:28:01.023967   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:28:01.400174   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:28:01.515069   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:28:01.897022   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:28:02.028131   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:28:02.406209   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:28:02.520917   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:28:02.879807   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:28:03.028457   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:28:03.391940   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:28:03.516634   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:28:03.884561   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:28:04.024426   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:28:04.379102   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:28:04.517613   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:28:04.887988   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:28:05.012890   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:28:05.380832   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:28:05.520640   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:28:05.886907   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:28:06.024905   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:28:06.398386   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:28:06.517047   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:28:06.882803   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:28:07.047376   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:28:07.393403   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:28:07.517594   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:28:07.885661   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:28:08.025789   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:28:08.379349   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:28:08.522545   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:28:09.663424   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:28:09.665460   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:28:09.670469   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:28:09.673255   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:28:09.894552   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:28:10.016754   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:28:10.393297   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:28:10.518209   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:28:10.879430   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:28:11.019342   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:28:11.383594   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:28:11.525772   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:28:11.890186   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:28:12.012301   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:28:12.399125   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:28:12.516279   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:28:12.883362   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:28:13.026120   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:28:13.377999   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:28:13.519867   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:28:13.880629   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:28:14.024120   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:28:14.384351   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:28:14.523743   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:28:14.891223   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:28:15.018291   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:28:15.378888   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:28:15.520035   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:28:15.886987   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:28:16.013709   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:28:16.385055   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:28:16.523039   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:28:16.889782   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:28:17.016671   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:28:17.383417   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:28:17.524670   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:28:17.889950   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:28:18.015085   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:28:18.381293   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:28:18.521589   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:28:18.889279   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:28:19.303501   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:28:19.389563   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:28:19.757793   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:28:19.894682   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:28:20.017702   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:28:20.381443   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:28:20.521543   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:28:20.888660   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:28:21.025846   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:28:21.377624   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:28:21.518769   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:28:21.884564   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:28:22.029701   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:28:22.387470   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:28:22.511905   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:28:22.893335   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:28:23.023151   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:28:23.383133   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:28:23.522132   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:28:23.891136   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:28:24.014532   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:28:24.381066   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:28:24.520356   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:28:24.896413   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:28:25.026339   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:28:25.391796   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:28:25.516072   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:28:25.881193   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:28:26.019359   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:28:26.391291   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:28:26.526085   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:28:26.889943   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:28:27.014551   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:28:27.381403   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:28:27.519715   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:28:27.881854   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:28:28.020790   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:28:28.394575   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:28:28.515207   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:28:28.880953   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:28:29.022084   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:28:29.387292   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:28:29.511744   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:28:29.879380   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:28:30.019178   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:28:30.386212   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:28:30.525423   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:28:30.878977   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:28:31.017264   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:28:31.386074   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:28:31.980093   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:28:31.981726   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:28:32.026078   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:28:32.514584   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:28:32.519693   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:28:32.893429   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:28:33.023169   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:28:33.394610   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:28:33.526609   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:28:33.879028   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:28:34.017290   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:28:34.381737   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:28:34.522309   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:28:34.890382   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:28:35.016603   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:28:35.385604   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:28:35.525342   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:28:35.887237   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:28:36.025378   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:28:36.392721   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:28:36.515426   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:28:36.878636   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:28:37.044459   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:28:37.451097   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:28:37.523695   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:28:37.888849   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:28:38.012083   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:28:38.381801   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:28:38.518459   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:28:38.882103   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:28:39.023013   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:28:39.387503   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:28:39.513610   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:28:39.889596   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:28:40.014020   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:28:40.392966   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:28:40.529947   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:28:40.886479   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:28:41.013480   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:28:41.377056   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:28:41.517581   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:28:41.886253   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:28:42.011460   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:28:42.633281   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:28:42.635402   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:28:42.883180   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:28:43.023061   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:28:43.382820   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:28:43.521293   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:28:43.889551   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:28:44.011809   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:28:44.378911   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:28:44.517783   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:28:44.886122   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:28:45.015111   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:28:45.381487   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:28:45.517781   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:28:45.880078   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:28:46.019579   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:28:46.384278   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:28:46.526046   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:28:46.878411   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:28:47.028383   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:28:47.397642   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:28:47.513177   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:28:47.889693   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:28:48.021756   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:28:48.393019   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:28:48.516897   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:28:49.067598   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:28:49.067773   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:28:49.396853   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:28:49.512900   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:28:49.892157   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:28:50.016852   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:28:50.379642   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:28:50.522339   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:28:50.888915   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:28:51.013546   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:28:51.381901   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:28:51.523221   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:28:51.891663   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:28:52.016519   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:28:52.390772   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:28:52.523426   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:28:52.889906   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:28:53.028723   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:28:53.392564   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:28:53.510641   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:28:53.889855   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:28:54.024694   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:28:54.394377   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:28:54.513181   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:28:54.891527   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:28:55.014114   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:28:55.402052   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:28:55.521340   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:28:55.890823   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:28:56.013855   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:28:56.385306   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:28:56.525251   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:28:56.881480   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:28:57.020434   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:28:57.415805   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:28:57.536226   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:28:57.883850   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:28:58.021527   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:28:58.381143   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:28:58.524251   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:28:58.890607   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:28:59.011288   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:28:59.380938   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:28:59.518279   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:28:59.887349   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:29:00.025971   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:29:00.392307   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:29:00.515521   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:29:00.881755   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:29:01.023379   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:29:01.392004   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:29:01.710362   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:29:02.015319   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:29:02.841955   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:29:02.845378   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:29:02.852052   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:29:02.887571   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:29:03.019574   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:29:03.380794   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:29:03.520295   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:29:03.886986   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:29:04.026551   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:29:04.386829   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:29:04.513011   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:29:04.879646   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:29:05.018439   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:29:05.384025   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:29:05.522170   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:29:05.885673   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:29:06.024896   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:29:06.388513   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:29:06.514654   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:29:06.948930   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:29:07.020243   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:29:07.387068   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:29:07.528257   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:29:07.891151   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:29:08.017193   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:29:08.382867   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:29:08.520208   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:29:08.885669   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:29:09.024723   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:29:09.391956   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:29:09.514136   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:29:10.053192   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:29:10.060117   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:29:10.386123   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:29:10.526392   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:29:10.890777   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:29:11.015211   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:29:11.461617   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:29:11.514899   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:29:11.881796   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:29:12.022902   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:29:12.923555   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:29:12.927558   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:29:12.935569   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:29:13.027965   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:29:13.391801   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:29:13.517982   14668 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 13:29:13.884432   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:29:14.023563   14668 kapi.go:107] duration metric: took 2m32.5218975s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0528 13:29:14.386369   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:29:14.891791   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:29:15.381271   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:29:15.891978   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:29:16.381929   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 13:29:16.897795   14668 kapi.go:107] duration metric: took 2m34.5309665s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0528 13:29:29.043935   14668 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0528 13:29:29.043935   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 13:29:29.537027   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 13:29:30.044607   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 13:29:30.533369   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 13:29:31.041892   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 13:29:31.534399   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 13:29:32.039201   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 13:29:32.546993   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 13:29:33.043885   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 13:29:33.534270   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 13:29:34.043740   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 13:29:34.533209   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 13:29:35.036470   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 13:29:35.541616   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 13:29:36.045044   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 13:29:36.543493   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 13:29:37.042984   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 13:29:37.540325   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 13:29:38.043223   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 13:29:38.546944   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 13:29:39.039034   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 13:29:39.545803   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 13:29:40.046898   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 13:29:40.534870   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 13:29:41.046444   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 13:29:41.541697   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 13:29:42.044987   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 13:29:42.546477   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 13:29:43.045259   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 13:29:43.544765   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 13:29:44.047265   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 13:29:44.545589   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 13:29:45.047590   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 13:29:45.545197   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 13:29:46.046609   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 13:29:46.534934   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 13:29:47.036126   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 13:29:47.534706   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 13:29:48.034300   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 13:29:48.534573   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 13:29:49.043672   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 13:29:49.536410   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 13:29:50.046393   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 13:29:50.545792   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 13:29:51.032633   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 13:29:51.539070   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 13:29:52.041304   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 13:29:52.549892   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 13:29:53.046618   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 13:29:53.543242   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 13:29:54.044542   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 13:29:54.544507   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 13:29:55.047755   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 13:29:55.546690   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 13:29:56.044900   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 13:29:56.543176   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 13:29:57.042700   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 13:29:57.546452   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 13:29:58.044158   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 13:29:58.539510   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 13:29:59.041352   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 13:29:59.541228   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 13:30:00.045493   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 13:30:00.544034   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 13:30:01.042585   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 13:30:01.533974   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 13:30:02.042188   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 13:30:02.536326   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 13:30:03.054618   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 13:30:03.532657   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 13:30:04.038278   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 13:30:04.898086   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 13:30:05.048787   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 13:30:05.533370   14668 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 13:30:06.038875   14668 kapi.go:107] duration metric: took 3m21.0129423s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0528 13:30:06.041592   14668 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-854100 cluster.
	I0528 13:30:06.044203   14668 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0528 13:30:06.046820   14668 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0528 13:30:06.049727   14668 out.go:177] * Enabled addons: helm-tiller, ingress-dns, storage-provisioner, nvidia-device-plugin, volcano, cloud-spanner, inspektor-gadget, metrics-server, yakd, default-storageclass, storage-provisioner-rancher, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0528 13:30:06.054607   14668 addons.go:510] duration metric: took 4m4.6268581s for enable addons: enabled=[helm-tiller ingress-dns storage-provisioner nvidia-device-plugin volcano cloud-spanner inspektor-gadget metrics-server yakd default-storageclass storage-provisioner-rancher volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I0528 13:30:06.054607   14668 start.go:245] waiting for cluster config update ...
	I0528 13:30:06.054607   14668 start.go:254] writing updated cluster config ...
	I0528 13:30:06.066645   14668 ssh_runner.go:195] Run: rm -f paused
	I0528 13:30:06.346133   14668 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0528 13:30:06.350261   14668 out.go:177] * Done! kubectl is now configured to use "addons-854100" cluster and "default" namespace by default
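	(Aside, not part of the captured run: the opt-out described in the gcp-auth messages above is an ordinary pod label. A minimal, hypothetical example — pod name and image are illustrative only — of creating a pod that the gcp-auth webhook would skip:
	  kubectl --context addons-854100 run skip-gcp-auth-demo --image=busybox \
	    --restart=Never --labels="gcp-auth-skip-secret=true" -- sleep 3600
	Because the webhook mutates pods at admission time, the label must be present when the pod is created; labeling an existing pod afterwards has no effect, which is why the log suggests recreating pods or rerunning the addon with --refresh.)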
	
	
	==> Docker <==
	May 28 20:30:51 addons-854100 cri-dockerd[1231]: time="2024-05-28T20:30:51Z" level=error msg="error getting RW layer size for container ID '94b300d8e5267f1005ee06a61111706789df633f19e65b716f2ae2f36dc93d58': Error response from daemon: No such container: 94b300d8e5267f1005ee06a61111706789df633f19e65b716f2ae2f36dc93d58"
	May 28 20:30:51 addons-854100 cri-dockerd[1231]: time="2024-05-28T20:30:51Z" level=error msg="Set backoffDuration to : 1m0s for container ID '94b300d8e5267f1005ee06a61111706789df633f19e65b716f2ae2f36dc93d58'"
	May 28 20:30:52 addons-854100 dockerd[1332]: time="2024-05-28T20:30:52.895772880Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 28 20:30:52 addons-854100 dockerd[1332]: time="2024-05-28T20:30:52.896008681Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 28 20:30:52 addons-854100 dockerd[1332]: time="2024-05-28T20:30:52.896114181Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 28 20:30:52 addons-854100 dockerd[1332]: time="2024-05-28T20:30:52.902651388Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 28 20:30:53 addons-854100 cri-dockerd[1231]: time="2024-05-28T20:30:53Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/df8e090e10c934c98de3b72dd9d570664ea1dc3617c42c610d4f06c09b4613dc/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	May 28 20:30:54 addons-854100 dockerd[1332]: time="2024-05-28T20:30:54.254827262Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 28 20:30:54 addons-854100 dockerd[1332]: time="2024-05-28T20:30:54.255528665Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 28 20:30:54 addons-854100 dockerd[1332]: time="2024-05-28T20:30:54.255633165Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 28 20:30:54 addons-854100 dockerd[1332]: time="2024-05-28T20:30:54.255991067Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 28 20:30:54 addons-854100 cri-dockerd[1231]: time="2024-05-28T20:30:54Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/f0e6aedbbe78caf5b1cfdf544e68790efa50aa16b734697ede607afb5c5ac324/resolv.conf as [nameserver 10.96.0.10 search my-volcano.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	May 28 20:30:54 addons-854100 cri-dockerd[1231]: time="2024-05-28T20:30:54Z" level=info msg="Stop pulling image busybox:stable: Status: Downloaded newer image for busybox:stable"
	May 28 20:30:55 addons-854100 dockerd[1332]: time="2024-05-28T20:30:55.217471165Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 28 20:30:55 addons-854100 dockerd[1332]: time="2024-05-28T20:30:55.217852067Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 28 20:30:55 addons-854100 dockerd[1332]: time="2024-05-28T20:30:55.218051868Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 28 20:30:55 addons-854100 dockerd[1332]: time="2024-05-28T20:30:55.218330569Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 28 20:30:55 addons-854100 dockerd[1326]: time="2024-05-28T20:30:55.342156371Z" level=info msg="ignoring event" container=f22e3bc3d1c2f60632167b88f524a1b5c1078fd6e314717e51868e9acfa6c2b8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 28 20:30:55 addons-854100 dockerd[1332]: time="2024-05-28T20:30:55.342611473Z" level=info msg="shim disconnected" id=f22e3bc3d1c2f60632167b88f524a1b5c1078fd6e314717e51868e9acfa6c2b8 namespace=moby
	May 28 20:30:55 addons-854100 dockerd[1332]: time="2024-05-28T20:30:55.342691974Z" level=warning msg="cleaning up after shim disconnected" id=f22e3bc3d1c2f60632167b88f524a1b5c1078fd6e314717e51868e9acfa6c2b8 namespace=moby
	May 28 20:30:55 addons-854100 dockerd[1332]: time="2024-05-28T20:30:55.342706074Z" level=info msg="cleaning up dead shim" namespace=moby
	May 28 20:30:57 addons-854100 dockerd[1326]: time="2024-05-28T20:30:57.688173590Z" level=info msg="ignoring event" container=df8e090e10c934c98de3b72dd9d570664ea1dc3617c42c610d4f06c09b4613dc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 28 20:30:57 addons-854100 dockerd[1332]: time="2024-05-28T20:30:57.688449591Z" level=info msg="shim disconnected" id=df8e090e10c934c98de3b72dd9d570664ea1dc3617c42c610d4f06c09b4613dc namespace=moby
	May 28 20:30:57 addons-854100 dockerd[1332]: time="2024-05-28T20:30:57.688505591Z" level=warning msg="cleaning up after shim disconnected" id=df8e090e10c934c98de3b72dd9d570664ea1dc3617c42c610d4f06c09b4613dc namespace=moby
	May 28 20:30:57 addons-854100 dockerd[1332]: time="2024-05-28T20:30:57.688516391Z" level=info msg="cleaning up dead shim" namespace=moby
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD
	f22e3bc3d1c2f       busybox@sha256:5eef5ed34e1e1ff0a4ae850395cbf665c4de6b4b83a32a0bc7bcb998e24e7bbb                                                              8 seconds ago        Exited              busybox                                  0                   df8e090e10c93       test-local-path
	dbcf1df5e86e5       busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79                                                              14 seconds ago       Exited              helper-pod                               0                   1737df62ff60c       helper-pod-create-pvc-72f9c62a-a060-44b9-a171-fede391dec31
	a6b5cbeabdb08       ghcr.io/headlamp-k8s/headlamp@sha256:34d59bf120f98415e3a69401f6636032a0dc39e1dbfcff149c09591de0fad474                                        27 seconds ago       Running             headlamp                                 0                   ed2e83243a547       headlamp-68456f997b-rdfqn
	2192c2b7c686e       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                                 58 seconds ago       Running             gcp-auth                                 0                   a8852205184e8       gcp-auth-5db96cd9b4-gkrsm
	b3554f684328c       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          About a minute ago   Running             csi-snapshotter                          0                   cc929519921cb       csi-hostpathplugin-88zzp
	6543b2f7d31e2       registry.k8s.io/ingress-nginx/controller@sha256:e24f39d3eed6bcc239a56f20098878845f62baa34b9f2be2fd2c38ce9fb0f29e                             About a minute ago   Running             controller                               0                   e123edc4140b1       ingress-nginx-controller-768f948f8f-ffqvq
	8354e952c2e2d       registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8                          2 minutes ago        Running             csi-provisioner                          0                   cc929519921cb       csi-hostpathplugin-88zzp
	b3e3b6e507387       fd19c461b125e                                                                                                                                2 minutes ago        Running             admission                                0                   35a3baf3ae0d3       volcano-admission-7b497cf95b-wsm6p
	105bba1f047c1       registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0                            2 minutes ago        Running             liveness-probe                           0                   cc929519921cb       csi-hostpathplugin-88zzp
	28baec20d1f95       registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5                           2 minutes ago        Running             hostpath                                 0                   cc929519921cb       csi-hostpathplugin-88zzp
	4c2938909ac5b       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c                2 minutes ago        Running             node-driver-registrar                    0                   cc929519921cb       csi-hostpathplugin-88zzp
	7a9234ec621a7       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   2 minutes ago        Running             csi-external-health-monitor-controller   0                   cc929519921cb       csi-hostpathplugin-88zzp
	6a9638cdfb464       registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b                             2 minutes ago        Running             csi-attacher                             0                   09ac16b6d6e41       csi-hostpath-attacher-0
	9b61927d08675       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              2 minutes ago        Running             csi-resizer                              0                   cdc92f2079479       csi-hostpath-resizer-0
	ba052b03d54ec       684c5ea3b61b2                                                                                                                                2 minutes ago        Exited              patch                                    2                   70e62e8b4d7bd       ingress-nginx-admission-patch-tzk22
	fb6a0bd396251       volcanosh/vc-scheduler@sha256:64d6efcf1a48366201aafcaf1bd4cb6d66246ec1c395ddb0deefe11350bcebba                                               2 minutes ago        Running             volcano-scheduler                        0                   2abc2ef443899       volcano-scheduler-765f888978-jqhvw
	b97ebea81d88b       volcanosh/vc-webhook-manager@sha256:082b6a3b7b8b69d98541a8ea56958ef427fdba54ea555870799f8c9ec2754c1b                                         2 minutes ago        Exited              main                                     0                   498e5c9e3f5fa       volcano-admission-init-f6wbt
	10b6a8135cd7c       volcanosh/vc-controller-manager@sha256:1dd0973f67becc3336f009cce4eac8677d857aaf4ba766cfff371ad34dfc34cf                                      2 minutes ago        Running             volcano-controller                       0                   05e79a25f9cdf       volcano-controller-86c5446455-6642n
	efb8e5bd0e6e8       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:36d05b4077fb8e3d13663702fa337f124675ba8667cbd949c03a8e8ea6fa4366                   2 minutes ago        Exited              create                                   0                   1d7a7c1d60c7a       ingress-nginx-admission-create-2b5s4
	e4f0e1ad51bdc       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                                       2 minutes ago        Running             local-path-provisioner                   0                   5d972e278af76       local-path-provisioner-8d985888d-kzpw5
	f3f9d13a09a6d       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      2 minutes ago        Running             volume-snapshot-controller               0                   edb87e1c8891e       snapshot-controller-745499f584-xl85g
	1bcac235c4525       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      2 minutes ago        Running             volume-snapshot-controller               0                   b6675fc2885a7       snapshot-controller-745499f584-d84wf
	d518b53a8b764       marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                                                        2 minutes ago        Running             yakd                                     0                   aaa8c5aa4db66       yakd-dashboard-5ddbf7d777-9866b
	185b15a9feff1       registry.k8s.io/metrics-server/metrics-server@sha256:db3800085a0957083930c3932b17580eec652cfb6156a05c0f79c7543e80d17a                        3 minutes ago        Running             metrics-server                           0                   1be47647c7f62       metrics-server-c59844bb4-rd2bf
	f9424854e1b9d       gcr.io/cloud-spanner-emulator/emulator@sha256:6a72be4b6978a014035656e130840ad1bc06c8aa7c4de78871464ad5714565d4                               3 minutes ago        Running             cloud-spanner-emulator                   0                   bb3ad8240a2ed       cloud-spanner-emulator-6fcd4f6f98-8dtmk
	b55eba1487eda       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f                             3 minutes ago        Running             minikube-ingress-dns                     0                   8b410d7f5d313       kube-ingress-dns-minikube
	205da7a8cb82a       nvcr.io/nvidia/k8s-device-plugin@sha256:1aff0e9f0759758f87cb158d78241472af3a76cdc631f01ab395f997fa80f707                                     4 minutes ago        Running             nvidia-device-plugin-ctr                 0                   135932477ed9a       nvidia-device-plugin-daemonset-52zc7
	a2c7177066b95       6e38f40d628db                                                                                                                                4 minutes ago        Running             storage-provisioner                      0                   cb420fe35c5cf       storage-provisioner
	7880b363e556b       cbb01a7bd410d                                                                                                                                4 minutes ago        Running             coredns                                  0                   31e92a69967c9       coredns-7db6d8ff4d-d92fw
	446fa0be70d95       747097150317f                                                                                                                                4 minutes ago        Running             kube-proxy                               0                   e49cfd0fae7f0       kube-proxy-jkg6r
	968ae0a40d421       3861cfcd7c04c                                                                                                                                5 minutes ago        Running             etcd                                     0                   81a0c18a1385f       etcd-addons-854100
	610fb9d63d914       a52dc94f0a912                                                                                                                                5 minutes ago        Running             kube-scheduler                           0                   45f0dd6ee1448       kube-scheduler-addons-854100
	999fad69c51ed       25a1387cdab82                                                                                                                                5 minutes ago        Running             kube-controller-manager                  0                   adb37c1478444       kube-controller-manager-addons-854100
	f5751b38aacc2       91be940803172                                                                                                                                5 minutes ago        Running             kube-apiserver                           0                   040637ffe6834       kube-apiserver-addons-854100
	
	
	==> controller_ingress [6543b2f7d31e] <==
	W0528 20:29:13.196938       8 client_config.go:618] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
	I0528 20:29:13.197499       8 main.go:205] "Creating API client" host="https://10.96.0.1:443"
	I0528 20:29:13.205776       8 main.go:248] "Running in Kubernetes cluster" major="1" minor="30" git="v1.30.1" state="clean" commit="6911225c3f747e1cd9d109c305436d08b668f086" platform="linux/amd64"
	I0528 20:29:13.394539       8 main.go:101] "SSL fake certificate created" file="/etc/ingress-controller/ssl/default-fake-certificate.pem"
	I0528 20:29:13.418072       8 ssl.go:535] "loading tls certificate" path="/usr/local/certificates/cert" key="/usr/local/certificates/key"
	I0528 20:29:13.435315       8 nginx.go:264] "Starting NGINX Ingress controller"
	I0528 20:29:13.470740       8 event.go:364] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"7169b079-a9b7-4e94-839d-885269833fc4", APIVersion:"v1", ResourceVersion:"727", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/ingress-nginx-controller
	I0528 20:29:13.472124       8 event.go:364] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"tcp-services", UID:"f8886d15-fbce-4ce6-b385-b8d34c0208da", APIVersion:"v1", ResourceVersion:"728", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/tcp-services
	I0528 20:29:13.472645       8 event.go:364] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"5c862306-b4f1-44dd-8ebc-033dbc6ce627", APIVersion:"v1", ResourceVersion:"730", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
	I0528 20:29:14.641260       8 nginx.go:307] "Starting NGINX process"
	I0528 20:29:14.641427       8 leaderelection.go:250] attempting to acquire leader lease ingress-nginx/ingress-nginx-leader...
	I0528 20:29:14.642637       8 nginx.go:327] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
	I0528 20:29:14.643459       8 controller.go:190] "Configuration changes detected, backend reload required"
	I0528 20:29:14.664741       8 leaderelection.go:260] successfully acquired lease ingress-nginx/ingress-nginx-leader
	I0528 20:29:14.665565       8 status.go:84] "New leader elected" identity="ingress-nginx-controller-768f948f8f-ffqvq"
	I0528 20:29:14.677261       8 status.go:219] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-768f948f8f-ffqvq" node="addons-854100"
	I0528 20:29:14.742803       8 controller.go:210] "Backend successfully reloaded"
	I0528 20:29:14.743008       8 controller.go:221] "Initial sync, sleeping for 1 second"
	I0528 20:29:14.743340       8 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-768f948f8f-ffqvq", UID:"f734fcd0-05d1-4a66-ad1f-6f8d777c08f9", APIVersion:"v1", ResourceVersion:"753", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	  Build:         4fb5aac1dd3669daa3a14d9de3e3cdb371b4c518
	  Repository:    https://github.com/kubernetes/ingress-nginx
	  nginx version: nginx/1.25.3
	
	-------------------------------------------------------------------------------
	
	
	
	==> coredns [7880b363e556] <==
	[INFO] 10.244.0.7:53189 - 41426 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000357s
	[INFO] 10.244.0.7:38951 - 33264 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.0002678s
	[INFO] 10.244.0.7:38951 - 59127 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.0003701s
	[INFO] 10.244.0.7:57868 - 9130 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.0001817s
	[INFO] 10.244.0.7:57868 - 26036 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.0002607s
	[INFO] 10.244.0.7:45513 - 14045 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.0001845s
	[INFO] 10.244.0.7:45513 - 17627 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.0005574s
	[INFO] 10.244.0.7:34213 - 3021 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.0001327s
	[INFO] 10.244.0.7:34213 - 46286 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.0002149s
	[INFO] 10.244.0.7:58130 - 5185 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.0000786s
	[INFO] 10.244.0.7:58130 - 29311 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.0001642s
	[INFO] 10.244.0.7:33354 - 16904 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.0001009s
	[INFO] 10.244.0.7:33354 - 21514 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.0000471s
	[INFO] 10.244.0.7:45140 - 38162 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.0002614s
	[INFO] 10.244.0.7:45140 - 53008 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.0000791s
	[INFO] 10.244.0.26:54535 - 54497 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.0004396s
	[INFO] 10.244.0.26:56039 - 42330 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.0004005s
	[INFO] 10.244.0.26:55043 - 34068 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.0004504s
	[INFO] 10.244.0.26:47330 - 17241 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000071s
	[INFO] 10.244.0.26:50243 - 56256 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.0001125s
	[INFO] 10.244.0.26:57110 - 55312 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000279501s
	[INFO] 10.244.0.26:55549 - 54257 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd 240 0.001532002s
	[INFO] 10.244.0.26:35281 - 34839 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd 230 0.001939502s
	[INFO] 10.244.0.28:35936 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000562202s
	[INFO] 10.244.0.28:54676 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000264201s
	
	
	==> describe nodes <==
	Name:               addons-854100
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-854100
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c95b4fdda455689199e2a93674568b261e34dc82
	                    minikube.k8s.io/name=addons-854100
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_28T13_25_48_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-854100
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-854100"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 28 May 2024 20:25:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-854100
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 28 May 2024 20:30:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 28 May 2024 20:30:56 +0000   Tue, 28 May 2024 20:25:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 28 May 2024 20:30:56 +0000   Tue, 28 May 2024 20:25:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 28 May 2024 20:30:56 +0000   Tue, 28 May 2024 20:25:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 28 May 2024 20:30:56 +0000   Tue, 28 May 2024 20:25:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.18.209.103
	  Hostname:    addons-854100
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912872Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912872Ki
	  pods:               110
	System Info:
	  Machine ID:                 4a24f562008d48a7a18e87c8c25ea79b
	  System UUID:                aa2e381a-700b-e344-b59e-5caaf7c46eaa
	  Boot ID:                    94d9d815-2c66-4110-84eb-4c0fe7ace611
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (25 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-6fcd4f6f98-8dtmk      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m34s
	  gcp-auth                    gcp-auth-5db96cd9b4-gkrsm                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  headlamp                    headlamp-68456f997b-rdfqn                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         40s
	  ingress-nginx               ingress-nginx-controller-768f948f8f-ffqvq    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         4m26s
	  kube-system                 coredns-7db6d8ff4d-d92fw                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     5m1s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m21s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m21s
	  kube-system                 csi-hostpathplugin-88zzp                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m21s
	  kube-system                 etcd-addons-854100                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         5m15s
	  kube-system                 kube-apiserver-addons-854100                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m15s
	  kube-system                 kube-controller-manager-addons-854100        200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m15s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m35s
	  kube-system                 kube-proxy-jkg6r                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m1s
	  kube-system                 kube-scheduler-addons-854100                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m15s
	  kube-system                 metrics-server-c59844bb4-rd2bf               100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         4m29s
	  kube-system                 nvidia-device-plugin-daemonset-52zc7         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m38s
	  kube-system                 snapshot-controller-745499f584-d84wf         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m27s
	  kube-system                 snapshot-controller-745499f584-xl85g         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m27s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m32s
	  local-path-storage          local-path-provisioner-8d985888d-kzpw5       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m27s
	  my-volcano                  test-job-nginx-0                             1 (50%)       1 (50%)     0 (0%)           0 (0%)         11s
	  volcano-system              volcano-admission-7b497cf95b-wsm6p           0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m25s
	  volcano-system              volcano-controller-86c5446455-6642n          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m24s
	  volcano-system              volcano-scheduler-765f888978-jqhvw           0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m23s
	  yakd-dashboard              yakd-dashboard-5ddbf7d777-9866b              0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     4m31s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1950m (97%)  1 (50%)
	  memory             588Mi (15%)  426Mi (11%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m50s                  kube-proxy       
	  Normal  Starting                 5m24s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  5m24s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m23s (x8 over 5m24s)  kubelet          Node addons-854100 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m23s (x8 over 5m24s)  kubelet          Node addons-854100 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m23s (x7 over 5m24s)  kubelet          Node addons-854100 status is now: NodeHasSufficientPID
	  Normal  Starting                 5m16s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m16s                  kubelet          Node addons-854100 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m16s                  kubelet          Node addons-854100 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m16s                  kubelet          Node addons-854100 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m15s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                5m11s                  kubelet          Node addons-854100 status is now: NodeReady
	  Normal  RegisteredNode           5m3s                   node-controller  Node addons-854100 event: Registered Node addons-854100 in Controller
	
	
	==> dmesg <==
	[  +5.034300] kauditd_printk_skb: 41 callbacks suppressed
	[  +5.054084] kauditd_printk_skb: 22 callbacks suppressed
	[ +10.127847] kauditd_printk_skb: 16 callbacks suppressed
	[  +5.028347] kauditd_printk_skb: 46 callbacks suppressed
	[  +5.066939] kauditd_printk_skb: 64 callbacks suppressed
	[  +5.457459] kauditd_printk_skb: 74 callbacks suppressed
	[ +12.933218] kauditd_printk_skb: 37 callbacks suppressed
	[May28 20:27] kauditd_printk_skb: 2 callbacks suppressed
	[ +16.694291] kauditd_printk_skb: 24 callbacks suppressed
	[May28 20:28] kauditd_printk_skb: 7 callbacks suppressed
	[  +5.099161] kauditd_printk_skb: 21 callbacks suppressed
	[ +13.822757] kauditd_printk_skb: 10 callbacks suppressed
	[  +7.171609] kauditd_printk_skb: 13 callbacks suppressed
	[  +6.426645] kauditd_printk_skb: 34 callbacks suppressed
	[  +5.114318] kauditd_printk_skb: 34 callbacks suppressed
	[May28 20:29] kauditd_printk_skb: 2 callbacks suppressed
	[ +11.930286] kauditd_printk_skb: 38 callbacks suppressed
	[ +23.581919] kauditd_printk_skb: 24 callbacks suppressed
	[May28 20:30] kauditd_printk_skb: 40 callbacks suppressed
	[ +11.061380] kauditd_printk_skb: 9 callbacks suppressed
	[  +5.081413] kauditd_printk_skb: 8 callbacks suppressed
	[  +5.877998] kauditd_printk_skb: 22 callbacks suppressed
	[ +13.081645] kauditd_printk_skb: 6 callbacks suppressed
	[ +10.443592] kauditd_printk_skb: 70 callbacks suppressed
	[  +6.072066] kauditd_printk_skb: 15 callbacks suppressed
	
	
	==> etcd [968ae0a40d42] <==
	{"level":"warn","ts":"2024-05-28T20:30:36.20663Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-28T20:30:35.55505Z","time spent":"651.544589ms","remote":"127.0.0.1:49858","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":484,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/snapshot-controller-leader\" mod_revision:1681 > success:<request_put:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" value_size:421 >> failure:<request_range:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" > >"}
	{"level":"info","ts":"2024-05-28T20:30:36.206628Z","caller":"traceutil/trace.go:171","msg":"trace[510577807] transaction","detail":"{read_only:false; response_revision:1706; number_of_response:1; }","duration":"653.5341ms","start":"2024-05-28T20:30:35.553081Z","end":"2024-05-28T20:30:36.206615Z","steps":["trace[510577807] 'process raft request'  (duration: 653.023597ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-28T20:30:36.206686Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-28T20:30:35.553068Z","time spent":"653.5856ms","remote":"127.0.0.1:49732","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1684 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2024-05-28T20:30:36.206716Z","caller":"traceutil/trace.go:171","msg":"trace[562918225] transaction","detail":"{read_only:false; response_revision:1708; number_of_response:1; }","duration":"352.429825ms","start":"2024-05-28T20:30:35.854275Z","end":"2024-05-28T20:30:36.206705Z","steps":["trace[562918225] 'process raft request'  (duration: 351.943323ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-28T20:30:36.207001Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-28T20:30:35.854254Z","time spent":"352.721426ms","remote":"127.0.0.1:49858","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":541,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/addons-854100\" mod_revision:1661 > success:<request_put:<key:\"/registry/leases/kube-node-lease/addons-854100\" value_size:487 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/addons-854100\" > >"}
	{"level":"info","ts":"2024-05-28T20:30:58.348095Z","caller":"traceutil/trace.go:171","msg":"trace[1776586542] transaction","detail":"{read_only:false; response_revision:1848; number_of_response:1; }","duration":"498.680708ms","start":"2024-05-28T20:30:57.849392Z","end":"2024-05-28T20:30:58.348073Z","steps":["trace[1776586542] 'process raft request'  (duration: 498.565707ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-28T20:30:58.34822Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-28T20:30:57.849379Z","time spent":"498.776108ms","remote":"127.0.0.1:49662","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":706,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/events/my-volcano/test-job-19d18e5c-4036-499c-9a22-e99680b94fe8.17d3c1266b6f478c\" mod_revision:1847 > success:<request_put:<key:\"/registry/events/my-volcano/test-job-19d18e5c-4036-499c-9a22-e99680b94fe8.17d3c1266b6f478c\" value_size:598 lease:6244398941925562300 >> failure:<request_range:<key:\"/registry/events/my-volcano/test-job-19d18e5c-4036-499c-9a22-e99680b94fe8.17d3c1266b6f478c\" > >"}
	{"level":"info","ts":"2024-05-28T20:30:58.348666Z","caller":"traceutil/trace.go:171","msg":"trace[1158245740] linearizableReadLoop","detail":"{readStateIndex:1941; appliedIndex:1941; }","duration":"375.331909ms","start":"2024-05-28T20:30:57.973325Z","end":"2024-05-28T20:30:58.348656Z","steps":["trace[1158245740] 'read index received'  (duration: 375.328609ms)","trace[1158245740] 'applied index is now lower than readState.Index'  (duration: 2.7µs)"],"step_count":2}
	{"level":"warn","ts":"2024-05-28T20:30:58.349034Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"375.703112ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/test-local-path\" ","response":"range_response_count:1 size:3866"}
	{"level":"info","ts":"2024-05-28T20:30:58.349069Z","caller":"traceutil/trace.go:171","msg":"trace[1994068146] range","detail":"{range_begin:/registry/pods/default/test-local-path; range_end:; response_count:1; response_revision:1848; }","duration":"375.739712ms","start":"2024-05-28T20:30:57.973319Z","end":"2024-05-28T20:30:58.349059Z","steps":["trace[1994068146] 'agreement among raft nodes before linearized reading'  (duration: 375.639411ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-28T20:30:58.349105Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-28T20:30:57.973273Z","time spent":"375.827212ms","remote":"127.0.0.1:49752","response type":"/etcdserverpb.KV/Range","request count":0,"request size":40,"response count":1,"response size":3889,"request content":"key:\"/registry/pods/default/test-local-path\" "}
	{"level":"warn","ts":"2024-05-28T20:30:58.607852Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"254.499534ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-05-28T20:30:58.608104Z","caller":"traceutil/trace.go:171","msg":"trace[805330047] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1848; }","duration":"254.831937ms","start":"2024-05-28T20:30:58.353149Z","end":"2024-05-28T20:30:58.607981Z","steps":["trace[805330047] 'range keys from in-memory index tree'  (duration: 253.252027ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-28T20:30:58.608679Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"204.420532ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/my-volcano/\" range_end:\"/registry/pods/my-volcano0\" ","response":"range_response_count:1 size:3627"}
	{"level":"info","ts":"2024-05-28T20:30:58.608721Z","caller":"traceutil/trace.go:171","msg":"trace[40953702] range","detail":"{range_begin:/registry/pods/my-volcano/; range_end:/registry/pods/my-volcano0; response_count:1; response_revision:1848; }","duration":"204.517134ms","start":"2024-05-28T20:30:58.404192Z","end":"2024-05-28T20:30:58.608709Z","steps":["trace[40953702] 'range keys from in-memory index tree'  (duration: 204.077931ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-28T20:30:58.609473Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"176.296763ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:2 size:7202"}
	{"level":"info","ts":"2024-05-28T20:30:58.609514Z","caller":"traceutil/trace.go:171","msg":"trace[1194676059] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:2; response_revision:1848; }","duration":"176.377763ms","start":"2024-05-28T20:30:58.433125Z","end":"2024-05-28T20:30:58.609503Z","steps":["trace[1194676059] 'range keys from in-memory index tree'  (duration: 175.826761ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-28T20:30:58.798797Z","caller":"traceutil/trace.go:171","msg":"trace[1714504399] transaction","detail":"{read_only:false; response_revision:1849; number_of_response:1; }","duration":"171.037632ms","start":"2024-05-28T20:30:58.627734Z","end":"2024-05-28T20:30:58.798771Z","steps":["trace[1714504399] 'process raft request'  (duration: 170.373528ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-28T20:30:58.799259Z","caller":"traceutil/trace.go:171","msg":"trace[1659742274] linearizableReadLoop","detail":"{readStateIndex:1942; appliedIndex:1941; }","duration":"170.022526ms","start":"2024-05-28T20:30:58.629224Z","end":"2024-05-28T20:30:58.799247Z","steps":["trace[1659742274] 'read index received'  (duration: 168.744518ms)","trace[1659742274] 'applied index is now lower than readState.Index'  (duration: 1.276308ms)"],"step_count":2}
	{"level":"warn","ts":"2024-05-28T20:30:58.799499Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"170.227527ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/\" range_end:\"/registry/clusterroles0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-05-28T20:30:58.799649Z","caller":"traceutil/trace.go:171","msg":"trace[571993235] range","detail":"{range_begin:/registry/clusterroles/; range_end:/registry/clusterroles0; response_count:0; response_revision:1849; }","duration":"170.357828ms","start":"2024-05-28T20:30:58.629205Z","end":"2024-05-28T20:30:58.799563Z","steps":["trace[571993235] 'agreement among raft nodes before linearized reading'  (duration: 170.157026ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-28T20:31:03.241074Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"229.550976ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/addons-854100\" ","response":"range_response_count:1 size:7928"}
	{"level":"info","ts":"2024-05-28T20:31:03.241143Z","caller":"traceutil/trace.go:171","msg":"trace[788354463] range","detail":"{range_begin:/registry/minions/addons-854100; range_end:; response_count:1; response_revision:1862; }","duration":"229.741278ms","start":"2024-05-28T20:31:03.011384Z","end":"2024-05-28T20:31:03.241125Z","steps":["trace[788354463] 'range keys from in-memory index tree'  (duration: 229.415576ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-28T20:31:03.241482Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"209.182055ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/bus.volcano.sh/commands/\" range_end:\"/registry/bus.volcano.sh/commands0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-05-28T20:31:03.241515Z","caller":"traceutil/trace.go:171","msg":"trace[1333401973] range","detail":"{range_begin:/registry/bus.volcano.sh/commands/; range_end:/registry/bus.volcano.sh/commands0; response_count:0; response_revision:1862; }","duration":"209.241755ms","start":"2024-05-28T20:31:03.032264Z","end":"2024-05-28T20:31:03.241506Z","steps":["trace[1333401973] 'count revisions from in-memory index tree'  (duration: 209.086454ms)"],"step_count":1}
	
	
	==> gcp-auth [2192c2b7c686] <==
	2024/05/28 20:30:05 GCP Auth Webhook started!
	2024/05/28 20:30:11 Ready to marshal response ...
	2024/05/28 20:30:11 Ready to write response ...
	2024/05/28 20:30:16 Ready to marshal response ...
	2024/05/28 20:30:16 Ready to write response ...
	2024/05/28 20:30:23 Ready to marshal response ...
	2024/05/28 20:30:23 Ready to write response ...
	2024/05/28 20:30:23 Ready to marshal response ...
	2024/05/28 20:30:23 Ready to write response ...
	2024/05/28 20:30:23 Ready to marshal response ...
	2024/05/28 20:30:23 Ready to write response ...
	2024/05/28 20:30:46 Ready to marshal response ...
	2024/05/28 20:30:46 Ready to write response ...
	2024/05/28 20:30:46 Ready to marshal response ...
	2024/05/28 20:30:46 Ready to write response ...
	2024/05/28 20:30:52 Ready to marshal response ...
	2024/05/28 20:30:52 Ready to write response ...
	2024/05/28 20:30:52 Ready to marshal response ...
	2024/05/28 20:30:52 Ready to write response ...
	
	
	==> kernel <==
	 20:31:03 up 7 min,  0 users,  load average: 2.08, 2.35, 1.20
	Linux addons-854100 5.10.207 #1 SMP Wed May 22 22:17:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [f5751b38aacc] <==
	I0528 20:30:34.738594       1 trace.go:236] Trace[188732085]: "Delete" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:a3a92060-a6cc-4eb6-a134-861e6333bc66,client:172.18.209.103,api-group:events.k8s.io,api-version:v1,name:,subresource:,namespace:gadget,protocol:HTTP/2.0,resource:events,scope:namespace,url:/apis/events.k8s.io/v1/namespaces/gadget/events,user-agent:kube-controller-manager/v1.30.1 (linux/amd64) kubernetes/6911225/system:serviceaccount:kube-system:namespace-controller,verb:DELETE (28-May-2024 20:30:34.153) (total time: 585ms):
	Trace[188732085]: ---"About to write a response" 582ms (20:30:34.738)
	Trace[188732085]: [585.136269ms] [585.136269ms] END
	I0528 20:30:36.200036       1 trace.go:236] Trace[1080384399]: "Delete" accept:application/json,audit-id:224f6c23-6a84-4a74-853f-3b8ac647aac8,client:127.0.0.1,api-group:,api-version:v1,name:tiller,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:serviceaccounts,scope:resource,url:/api/v1/namespaces/kube-system/serviceaccounts/tiller,user-agent:kubectl/v1.30.1 (linux/amd64) kubernetes/6911225,verb:DELETE (28-May-2024 20:30:35.547) (total time: 652ms):
	Trace[1080384399]: ---"Object deleted from database" 652ms (20:30:36.199)
	Trace[1080384399]: [652.731496ms] [652.731496ms] END
	I0528 20:30:36.209259       1 trace.go:236] Trace[506252174]: "Update" accept:application/json, */*,audit-id:ef633b88-b4d0-4c9b-8e81-a9ba261776aa,client:172.18.209.103,api-group:,api-version:v1,name:k8s.io-minikube-hostpath,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:endpoints,scope:resource,url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,verb:PUT (28-May-2024 20:30:35.551) (total time: 657ms):
	Trace[506252174]: ["GuaranteedUpdate etcd3" audit-id:ef633b88-b4d0-4c9b-8e81-a9ba261776aa,key:/services/endpoints/kube-system/k8s.io-minikube-hostpath,type:*core.Endpoints,resource:endpoints 657ms (20:30:35.552)
	Trace[506252174]:  ---"Txn call completed" 656ms (20:30:36.209)]
	Trace[506252174]: [657.74882ms] [657.74882ms] END
	I0528 20:30:36.215313       1 trace.go:236] Trace[301578713]: "Get" accept:application/vnd.kubernetes.protobuf;as=PartialObjectMetadata;g=meta.k8s.io;v=v1,application/json;as=PartialObjectMetadata;g=meta.k8s.io;v=v1,application/json,audit-id:d20bba5f-1327-484b-a961-ad44680e6018,client:172.18.209.103,api-group:apps,api-version:v1,name:tiller-deploy-6677d64bcd,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:replicasets,scope:resource,url:/apis/apps/v1/namespaces/kube-system/replicasets/tiller-deploy-6677d64bcd,user-agent:kube-controller-manager/v1.30.1 (linux/amd64) kubernetes/6911225/system:serviceaccount:kube-system:generic-garbage-collector,verb:GET (28-May-2024 20:30:35.553) (total time: 662ms):
	Trace[301578713]: ---"About to write a response" 662ms (20:30:36.215)
	Trace[301578713]: [662.196842ms] [662.196842ms] END
	I0528 20:30:36.216225       1 trace.go:236] Trace[425670694]: "Update" accept:application/json, */*,audit-id:89b6b37d-7755-4a60-bdd7-0cb15046323d,client:10.244.0.11,api-group:coordination.k8s.io,api-version:v1,name:snapshot-controller-leader,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/snapshot-controller-leader,user-agent:snapshot-controller/v0.0.0 (linux/amd64) kubernetes/$Format,verb:PUT (28-May-2024 20:30:35.551) (total time: 664ms):
	Trace[425670694]: ["GuaranteedUpdate etcd3" audit-id:89b6b37d-7755-4a60-bdd7-0cb15046323d,key:/leases/kube-system/snapshot-controller-leader,type:*coordination.Lease,resource:leases.coordination.k8s.io 662ms (20:30:35.554)
	Trace[425670694]:  ---"Txn call completed" 661ms (20:30:36.216)]
	Trace[425670694]: [664.607353ms] [664.607353ms] END
	E0528 20:30:37.579353       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"gadget\" not found]"
	I0528 20:30:52.160086       1 controller.go:615] quota admission added evaluator for: jobs.batch.volcano.sh
	I0528 20:30:52.225767       1 controller.go:615] quota admission added evaluator for: podgroups.scheduling.volcano.sh
	I0528 20:30:58.349592       1 trace.go:236] Trace[1815028846]: "Patch" accept:application/json, */*,audit-id:bfcec831-8295-4c7b-af13-44b63dcc3609,client:10.244.0.18,api-group:,api-version:v1,name:test-job-19d18e5c-4036-499c-9a22-e99680b94fe8.17d3c1266b6f478c,subresource:,namespace:my-volcano,protocol:HTTP/2.0,resource:events,scope:resource,url:/api/v1/namespaces/my-volcano/events/test-job-19d18e5c-4036-499c-9a22-e99680b94fe8.17d3c1266b6f478c,user-agent:vc-scheduler/v0.0.0 (linux/amd64) kubernetes/$Format,verb:PATCH (28-May-2024 20:30:57.846) (total time: 503ms):
	Trace[1815028846]: ["GuaranteedUpdate etcd3" audit-id:bfcec831-8295-4c7b-af13-44b63dcc3609,key:/events/my-volcano/test-job-19d18e5c-4036-499c-9a22-e99680b94fe8.17d3c1266b6f478c,type:*core.Event,resource:events 503ms (20:30:57.846)
	Trace[1815028846]:  ---"Txn call completed" 500ms (20:30:58.349)]
	Trace[1815028846]: ---"Object stored in database" 500ms (20:30:58.349)
	Trace[1815028846]: [503.468731ms] [503.468731ms] END
	
	
	==> kube-controller-manager [999fad69c51e] <==
	I0528 20:30:22.224634       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0528 20:30:22.245374       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0528 20:30:23.587454       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-68456f997b" duration="159.743406ms"
	I0528 20:30:23.616179       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-68456f997b" duration="28.282407ms"
	I0528 20:30:23.616596       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-68456f997b" duration="99.5µs"
	E0528 20:30:30.143204       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
	I0528 20:30:31.252082       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0528 20:30:31.253366       1 shared_informer.go:320] Caches are synced for resource quota
	W0528 20:30:31.570056       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0528 20:30:31.570158       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0528 20:30:31.623039       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0528 20:30:31.623255       1 shared_informer.go:320] Caches are synced for garbage collector
	W0528 20:30:33.616713       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0528 20:30:33.616754       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0528 20:30:36.242621       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/tiller-deploy-6677d64bcd" duration="361.102µs"
	W0528 20:30:36.915297       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0528 20:30:36.915594       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0528 20:30:37.810161       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-68456f997b" duration="231.3µs"
	I0528 20:30:37.921189       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-68456f997b" duration="59.602473ms"
	I0528 20:30:37.921292       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-68456f997b" duration="46.8µs"
	I0528 20:30:40.301340       1 replica_set.go:676] "Finished syncing" logger="replicationcontroller-controller" kind="ReplicationController" key="kube-system/registry" duration="83.2µs"
	W0528 20:30:45.209119       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0528 20:30:45.209162       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0528 20:30:45.220546       1 namespace_controller.go:182] "Namespace has been deleted" logger="namespace-controller" namespace="gadget"
	I0528 20:30:51.799276       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="volcano-system/volcano-admission-init"
	
	
	==> kube-proxy [446fa0be70d9] <==
	I0528 20:26:11.756983       1 server_linux.go:69] "Using iptables proxy"
	I0528 20:26:12.011225       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.18.209.103"]
	I0528 20:26:12.408294       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0528 20:26:12.408463       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0528 20:26:12.408508       1 server_linux.go:165] "Using iptables Proxier"
	I0528 20:26:12.458631       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0528 20:26:12.459039       1 server.go:872] "Version info" version="v1.30.1"
	I0528 20:26:12.459065       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0528 20:26:12.489869       1 config.go:192] "Starting service config controller"
	I0528 20:26:12.490054       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0528 20:26:12.490211       1 config.go:101] "Starting endpoint slice config controller"
	I0528 20:26:12.491847       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0528 20:26:12.491479       1 config.go:319] "Starting node config controller"
	I0528 20:26:12.492174       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0528 20:26:12.600715       1 shared_informer.go:320] Caches are synced for service config
	I0528 20:26:12.601125       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0528 20:26:12.697651       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [610fb9d63d91] <==
	W0528 20:25:45.599376       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0528 20:25:45.599581       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0528 20:25:45.612576       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0528 20:25:45.612782       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0528 20:25:45.671306       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0528 20:25:45.671651       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0528 20:25:45.678757       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0528 20:25:45.680427       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0528 20:25:45.683317       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0528 20:25:45.684005       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0528 20:25:45.690005       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0528 20:25:45.690134       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0528 20:25:45.701479       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0528 20:25:45.702239       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0528 20:25:45.717066       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0528 20:25:45.717130       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0528 20:25:45.759334       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0528 20:25:45.759751       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0528 20:25:45.787052       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0528 20:25:45.787151       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0528 20:25:45.867298       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0528 20:25:45.867353       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0528 20:25:46.043720       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0528 20:25:46.044725       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0528 20:25:48.441270       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	May 28 20:30:52 addons-854100 kubelet[2120]: I0528 20:30:52.001876    2120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2549abd5-e973-4727-bda6-cfd4c0fee1bc" path="/var/lib/kubelet/pods/2549abd5-e973-4727-bda6-cfd4c0fee1bc/volumes"
	May 28 20:30:52 addons-854100 kubelet[2120]: I0528 20:30:52.004439    2120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6cafa888-4260-4fae-a639-f0f02d1b3417" path="/var/lib/kubelet/pods/6cafa888-4260-4fae-a639-f0f02d1b3417/volumes"
	May 28 20:30:52 addons-854100 kubelet[2120]: I0528 20:30:52.248529    2120 topology_manager.go:215] "Topology Admit Handler" podUID="2232add3-221f-4443-980b-11d6eb7b355c" podNamespace="default" podName="test-local-path"
	May 28 20:30:52 addons-854100 kubelet[2120]: E0528 20:30:52.249744    2120 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6d2fd006-f087-478b-b49c-a0b12ebbcf40" containerName="gadget"
	May 28 20:30:52 addons-854100 kubelet[2120]: E0528 20:30:52.250153    2120 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6d2fd006-f087-478b-b49c-a0b12ebbcf40" containerName="gadget"
	May 28 20:30:52 addons-854100 kubelet[2120]: E0528 20:30:52.250352    2120 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2549abd5-e973-4727-bda6-cfd4c0fee1bc" containerName="helper-pod"
	May 28 20:30:52 addons-854100 kubelet[2120]: I0528 20:30:52.250781    2120 memory_manager.go:354] "RemoveStaleState removing state" podUID="2549abd5-e973-4727-bda6-cfd4c0fee1bc" containerName="helper-pod"
	May 28 20:30:52 addons-854100 kubelet[2120]: I0528 20:30:52.251126    2120 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d2fd006-f087-478b-b49c-a0b12ebbcf40" containerName="gadget"
	May 28 20:30:52 addons-854100 kubelet[2120]: I0528 20:30:52.328533    2120 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pht57\" (UniqueName: \"kubernetes.io/projected/2232add3-221f-4443-980b-11d6eb7b355c-kube-api-access-pht57\") pod \"test-local-path\" (UID: \"2232add3-221f-4443-980b-11d6eb7b355c\") " pod="default/test-local-path"
	May 28 20:30:52 addons-854100 kubelet[2120]: I0528 20:30:52.329352    2120 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-72f9c62a-a060-44b9-a171-fede391dec31\" (UniqueName: \"kubernetes.io/host-path/2232add3-221f-4443-980b-11d6eb7b355c-pvc-72f9c62a-a060-44b9-a171-fede391dec31\") pod \"test-local-path\" (UID: \"2232add3-221f-4443-980b-11d6eb7b355c\") " pod="default/test-local-path"
	May 28 20:30:52 addons-854100 kubelet[2120]: I0528 20:30:52.329616    2120 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/2232add3-221f-4443-980b-11d6eb7b355c-gcp-creds\") pod \"test-local-path\" (UID: \"2232add3-221f-4443-980b-11d6eb7b355c\") " pod="default/test-local-path"
	May 28 20:30:53 addons-854100 kubelet[2120]: I0528 20:30:53.106550    2120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df8e090e10c934c98de3b72dd9d570664ea1dc3617c42c610d4f06c09b4613dc"
	May 28 20:30:53 addons-854100 kubelet[2120]: I0528 20:30:53.693743    2120 topology_manager.go:215] "Topology Admit Handler" podUID="7af9523d-0192-4135-8854-2d15a10b594a" podNamespace="my-volcano" podName="test-job-nginx-0"
	May 28 20:30:53 addons-854100 kubelet[2120]: I0528 20:30:53.744776    2120 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jm6xf\" (UniqueName: \"kubernetes.io/projected/7af9523d-0192-4135-8854-2d15a10b594a-kube-api-access-jm6xf\") pod \"test-job-nginx-0\" (UID: \"7af9523d-0192-4135-8854-2d15a10b594a\") " pod="my-volcano/test-job-nginx-0"
	May 28 20:30:53 addons-854100 kubelet[2120]: I0528 20:30:53.745144    2120 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/7af9523d-0192-4135-8854-2d15a10b594a-gcp-creds\") pod \"test-job-nginx-0\" (UID: \"7af9523d-0192-4135-8854-2d15a10b594a\") " pod="my-volcano/test-job-nginx-0"
	May 28 20:30:59 addons-854100 kubelet[2120]: I0528 20:30:59.014860    2120 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pht57\" (UniqueName: \"kubernetes.io/projected/2232add3-221f-4443-980b-11d6eb7b355c-kube-api-access-pht57\") pod \"2232add3-221f-4443-980b-11d6eb7b355c\" (UID: \"2232add3-221f-4443-980b-11d6eb7b355c\") "
	May 28 20:30:59 addons-854100 kubelet[2120]: I0528 20:30:59.014977    2120 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/2232add3-221f-4443-980b-11d6eb7b355c-pvc-72f9c62a-a060-44b9-a171-fede391dec31\") pod \"2232add3-221f-4443-980b-11d6eb7b355c\" (UID: \"2232add3-221f-4443-980b-11d6eb7b355c\") "
	May 28 20:30:59 addons-854100 kubelet[2120]: I0528 20:30:59.015018    2120 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/2232add3-221f-4443-980b-11d6eb7b355c-gcp-creds\") pod \"2232add3-221f-4443-980b-11d6eb7b355c\" (UID: \"2232add3-221f-4443-980b-11d6eb7b355c\") "
	May 28 20:30:59 addons-854100 kubelet[2120]: I0528 20:30:59.015117    2120 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2232add3-221f-4443-980b-11d6eb7b355c-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "2232add3-221f-4443-980b-11d6eb7b355c" (UID: "2232add3-221f-4443-980b-11d6eb7b355c"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	May 28 20:30:59 addons-854100 kubelet[2120]: I0528 20:30:59.015159    2120 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2232add3-221f-4443-980b-11d6eb7b355c-pvc-72f9c62a-a060-44b9-a171-fede391dec31" (OuterVolumeSpecName: "data") pod "2232add3-221f-4443-980b-11d6eb7b355c" (UID: "2232add3-221f-4443-980b-11d6eb7b355c"). InnerVolumeSpecName "pvc-72f9c62a-a060-44b9-a171-fede391dec31". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	May 28 20:30:59 addons-854100 kubelet[2120]: I0528 20:30:59.020311    2120 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2232add3-221f-4443-980b-11d6eb7b355c-kube-api-access-pht57" (OuterVolumeSpecName: "kube-api-access-pht57") pod "2232add3-221f-4443-980b-11d6eb7b355c" (UID: "2232add3-221f-4443-980b-11d6eb7b355c"). InnerVolumeSpecName "kube-api-access-pht57". PluginName "kubernetes.io/projected", VolumeGidValue ""
	May 28 20:30:59 addons-854100 kubelet[2120]: I0528 20:30:59.115932    2120 reconciler_common.go:289] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/2232add3-221f-4443-980b-11d6eb7b355c-gcp-creds\") on node \"addons-854100\" DevicePath \"\""
	May 28 20:30:59 addons-854100 kubelet[2120]: I0528 20:30:59.115992    2120 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-pht57\" (UniqueName: \"kubernetes.io/projected/2232add3-221f-4443-980b-11d6eb7b355c-kube-api-access-pht57\") on node \"addons-854100\" DevicePath \"\""
	May 28 20:30:59 addons-854100 kubelet[2120]: I0528 20:30:59.116027    2120 reconciler_common.go:289] "Volume detached for volume \"pvc-72f9c62a-a060-44b9-a171-fede391dec31\" (UniqueName: \"kubernetes.io/host-path/2232add3-221f-4443-980b-11d6eb7b355c-pvc-72f9c62a-a060-44b9-a171-fede391dec31\") on node \"addons-854100\" DevicePath \"\""
	May 28 20:30:59 addons-854100 kubelet[2120]: I0528 20:30:59.674248    2120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df8e090e10c934c98de3b72dd9d570664ea1dc3617c42c610d4f06c09b4613dc"
	
	
	==> storage-provisioner [a2c7177066b9] <==
	I0528 20:26:36.195142       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0528 20:26:36.244064       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0528 20:26:36.244118       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0528 20:26:36.260254       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0528 20:26:36.260419       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-854100_75cfb314-e08b-45d4-9eb0-e23db08c919b!
	I0528 20:26:36.261714       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5b1f2162-2bf9-456d-90f4-b811b4691d60", APIVersion:"v1", ResourceVersion:"687", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-854100_75cfb314-e08b-45d4-9eb0-e23db08c919b became leader
	I0528 20:26:36.361220       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-854100_75cfb314-e08b-45d4-9eb0-e23db08c919b!
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0528 13:30:53.740361   15284 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p addons-854100 -n addons-854100
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p addons-854100 -n addons-854100: (13.8222888s)
helpers_test.go:261: (dbg) Run:  kubectl --context addons-854100 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-2b5s4 ingress-nginx-admission-patch-tzk22
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-854100 describe pod ingress-nginx-admission-create-2b5s4 ingress-nginx-admission-patch-tzk22
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-854100 describe pod ingress-nginx-admission-create-2b5s4 ingress-nginx-admission-patch-tzk22: exit status 1 (215.3461ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-2b5s4" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-tzk22" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-854100 describe pod ingress-nginx-admission-create-2b5s4 ingress-nginx-admission-patch-tzk22: exit status 1
--- FAIL: TestAddons/parallel/Registry (73.38s)
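Note: the non-empty stderr captured in the ** stderr ** block above is the Docker CLI context warning from main.go:291; it comes from the Windows host's Docker CLI configuration rather than from anything inside the cluster, yet it is what trips the empty-stderr check. Below is a minimal, hypothetical Go sketch of an allow-list filter that would tolerate that one known-benign warning while still surfacing any other stderr output. The names knownBenignPatterns and filterStderr are illustrative assumptions, not minikube test code.

	package main

	import (
		"fmt"
		"strings"
	)

	// knownBenignPatterns lists stderr substrings this sketch treats as harmless
	// noise. The entry below matches the Docker CLI context warning captured in
	// the stderr block above. (Illustrative only.)
	var knownBenignPatterns = []string{
		`Unable to resolve the current Docker CLI context "default"`,
	}

	// filterStderr drops lines that match a known-benign pattern and returns
	// whatever remains; an empty result means stderr was effectively clean.
	func filterStderr(stderr string) []string {
		var unexpected []string
		for _, line := range strings.Split(stderr, "\n") {
			trimmed := strings.TrimSpace(line)
			if trimmed == "" {
				continue
			}
			benign := false
			for _, pat := range knownBenignPatterns {
				if strings.Contains(trimmed, pat) {
					benign = true
					break
				}
			}
			if !benign {
				unexpected = append(unexpected, trimmed)
			}
		}
		return unexpected
	}

	func main() {
		stderr := `W0528 13:30:53.740361 15284 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found`
		if leftovers := filterStderr(stderr); len(leftovers) > 0 {
			fmt.Println("unexpected stderr:", leftovers)
		} else {
			fmt.Println("stderr contains only known-benign warnings")
		}
	}

Run against the stderr shown above, filterStderr would return nothing, so only genuinely unexpected output would fail the check.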

                                                
                                    
TestErrorSpam/setup (200.54s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -p nospam-999500 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-999500 --driver=hyperv
error_spam_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -p nospam-999500 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-999500 --driver=hyperv: (3m20.5438545s)
error_spam_test.go:96: unexpected stderr: "W0528 13:35:21.537929    8612 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."
error_spam_test.go:110: minikube stdout:
* [nospam-999500] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4412 Build 19045.4412
- KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
- MINIKUBE_LOCATION=18966
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
* Using the hyperv driver based on user configuration
* Starting "nospam-999500" primary control-plane node in "nospam-999500" cluster
* Creating hyperv VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
* Preparing Kubernetes v1.30.1 on Docker 26.0.2 ...
- Generating certificates and keys ...
- Booting up control plane ...
- Configuring RBAC rules ...
* Configuring bridge CNI (Container Networking Interface) ...
* Verifying Kubernetes components...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Enabled addons: storage-provisioner, default-storageclass
* Done! kubectl is now configured to use "nospam-999500" cluster and "default" namespace by default
error_spam_test.go:111: minikube stderr:
W0528 13:35:21.537929    8612 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
--- FAIL: TestErrorSpam/setup (200.54s)
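Note: the unexpected stderr here is the same Docker CLI context warning; the CLI is looking for context metadata at C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json and the path does not exist on the worker. Docker appears to name that directory after a SHA-256 digest of the context name, so a small diagnostic sketch (an assumption for illustration, not part of this suite) can check whether the metadata file the warning points at is actually present:

	package main

	import (
		"crypto/sha256"
		"fmt"
		"os"
		"path/filepath"
	)

	func main() {
		// Assumption: the metadata directory is named after the SHA-256 digest of
		// the context name ("default"), matching the path shown in the warning.
		digest := fmt.Sprintf("%x", sha256.Sum256([]byte("default")))

		home, err := os.UserHomeDir()
		if err != nil {
			fmt.Println("cannot determine home directory:", err)
			return
		}
		metaPath := filepath.Join(home, ".docker", "contexts", "meta", digest, "meta.json")

		if _, err := os.Stat(metaPath); err != nil {
			// This is the state that triggers the warning in the logs above:
			// the referenced meta.json is missing.
			fmt.Println("context metadata missing:", metaPath, "-", err)
			return
		}
		fmt.Println("context metadata present:", metaPath)
	}

If the digest assumption holds, the computed path should match the one in the warning, which makes it easy to confirm on the Jenkins worker whether the context metadata was removed or never created.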

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (34.38s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:731: link out/minikube-windows-amd64.exe out\kubectl.exe: Cannot create a file when that file already exists.
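Note: the failure is the single line above. The test links out/minikube-windows-amd64.exe to out\kubectl.exe, and the error string has the shape of Go's os.Link LinkError ("link <old> <new>: ..."), raised because a kubectl.exe left over from an earlier run already exists at the target path on Windows. A hypothetical idempotent variant that removes a stale target before linking (illustrative only, not the suite's actual helper) is sketched below.

	package main

	import (
		"errors"
		"fmt"
		"os"
	)

	// linkReplacing creates a hard link from src to dst, removing any stale dst
	// first so a leftover file from a previous run cannot fail the operation.
	// Hypothetical helper for illustration; not minikube's actual test code.
	func linkReplacing(src, dst string) error {
		if err := os.Remove(dst); err != nil && !errors.Is(err, os.ErrNotExist) {
			return fmt.Errorf("removing stale %s: %w", dst, err)
		}
		return os.Link(src, dst)
	}

	func main() {
		if err := linkReplacing("out/minikube-windows-amd64.exe", "out/kubectl.exe"); err != nil {
			fmt.Println("link failed:", err)
			return
		}
		fmt.Println("kubectl.exe link refreshed")
	}

Deleting the stale out\kubectl.exe between runs would have the same effect as the removal step in the sketch.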
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-200200 -n functional-200200
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-200200 -n functional-200200: (12.237348s)
helpers_test.go:244: <<< TestFunctional/serial/MinikubeKubectlCmdDirectly FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmdDirectly]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-200200 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-200200 logs -n 25: (8.7669873s)
helpers_test.go:252: TestFunctional/serial/MinikubeKubectlCmdDirectly logs: 
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| Command |                            Args                             |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| pause   | nospam-999500 --log_dir                                     | nospam-999500     | minikube1\jenkins | v1.33.1 | 28 May 24 13:39 PDT | 28 May 24 13:40 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-999500 |                   |                   |         |                     |                     |
	|         | pause                                                       |                   |                   |         |                     |                     |
	| unpause | nospam-999500 --log_dir                                     | nospam-999500     | minikube1\jenkins | v1.33.1 | 28 May 24 13:40 PDT | 28 May 24 13:40 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-999500 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| unpause | nospam-999500 --log_dir                                     | nospam-999500     | minikube1\jenkins | v1.33.1 | 28 May 24 13:40 PDT | 28 May 24 13:40 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-999500 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| unpause | nospam-999500 --log_dir                                     | nospam-999500     | minikube1\jenkins | v1.33.1 | 28 May 24 13:40 PDT | 28 May 24 13:40 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-999500 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| stop    | nospam-999500 --log_dir                                     | nospam-999500     | minikube1\jenkins | v1.33.1 | 28 May 24 13:40 PDT | 28 May 24 13:41 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-999500 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| stop    | nospam-999500 --log_dir                                     | nospam-999500     | minikube1\jenkins | v1.33.1 | 28 May 24 13:41 PDT | 28 May 24 13:41 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-999500 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| stop    | nospam-999500 --log_dir                                     | nospam-999500     | minikube1\jenkins | v1.33.1 | 28 May 24 13:41 PDT | 28 May 24 13:41 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-999500 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| delete  | -p nospam-999500                                            | nospam-999500     | minikube1\jenkins | v1.33.1 | 28 May 24 13:41 PDT | 28 May 24 13:41 PDT |
	| start   | -p functional-200200                                        | functional-200200 | minikube1\jenkins | v1.33.1 | 28 May 24 13:41 PDT | 28 May 24 13:45 PDT |
	|         | --memory=4000                                               |                   |                   |         |                     |                     |
	|         | --apiserver-port=8441                                       |                   |                   |         |                     |                     |
	|         | --wait=all --driver=hyperv                                  |                   |                   |         |                     |                     |
	| start   | -p functional-200200                                        | functional-200200 | minikube1\jenkins | v1.33.1 | 28 May 24 13:45 PDT | 28 May 24 13:47 PDT |
	|         | --alsologtostderr -v=8                                      |                   |                   |         |                     |                     |
	| cache   | functional-200200 cache add                                 | functional-200200 | minikube1\jenkins | v1.33.1 | 28 May 24 13:47 PDT | 28 May 24 13:47 PDT |
	|         | registry.k8s.io/pause:3.1                                   |                   |                   |         |                     |                     |
	| cache   | functional-200200 cache add                                 | functional-200200 | minikube1\jenkins | v1.33.1 | 28 May 24 13:47 PDT | 28 May 24 13:48 PDT |
	|         | registry.k8s.io/pause:3.3                                   |                   |                   |         |                     |                     |
	| cache   | functional-200200 cache add                                 | functional-200200 | minikube1\jenkins | v1.33.1 | 28 May 24 13:48 PDT | 28 May 24 13:48 PDT |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | functional-200200 cache add                                 | functional-200200 | minikube1\jenkins | v1.33.1 | 28 May 24 13:48 PDT | 28 May 24 13:48 PDT |
	|         | minikube-local-cache-test:functional-200200                 |                   |                   |         |                     |                     |
	| cache   | functional-200200 cache delete                              | functional-200200 | minikube1\jenkins | v1.33.1 | 28 May 24 13:48 PDT | 28 May 24 13:48 PDT |
	|         | minikube-local-cache-test:functional-200200                 |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube1\jenkins | v1.33.1 | 28 May 24 13:48 PDT | 28 May 24 13:48 PDT |
	|         | registry.k8s.io/pause:3.3                                   |                   |                   |         |                     |                     |
	| cache   | list                                                        | minikube          | minikube1\jenkins | v1.33.1 | 28 May 24 13:48 PDT | 28 May 24 13:48 PDT |
	| ssh     | functional-200200 ssh sudo                                  | functional-200200 | minikube1\jenkins | v1.33.1 | 28 May 24 13:48 PDT | 28 May 24 13:48 PDT |
	|         | crictl images                                               |                   |                   |         |                     |                     |
	| ssh     | functional-200200                                           | functional-200200 | minikube1\jenkins | v1.33.1 | 28 May 24 13:48 PDT | 28 May 24 13:48 PDT |
	|         | ssh sudo docker rmi                                         |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| ssh     | functional-200200 ssh                                       | functional-200200 | minikube1\jenkins | v1.33.1 | 28 May 24 13:48 PDT |                     |
	|         | sudo crictl inspecti                                        |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | functional-200200 cache reload                              | functional-200200 | minikube1\jenkins | v1.33.1 | 28 May 24 13:48 PDT | 28 May 24 13:49 PDT |
	| ssh     | functional-200200 ssh                                       | functional-200200 | minikube1\jenkins | v1.33.1 | 28 May 24 13:49 PDT | 28 May 24 13:49 PDT |
	|         | sudo crictl inspecti                                        |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube1\jenkins | v1.33.1 | 28 May 24 13:49 PDT | 28 May 24 13:49 PDT |
	|         | registry.k8s.io/pause:3.1                                   |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube1\jenkins | v1.33.1 | 28 May 24 13:49 PDT | 28 May 24 13:49 PDT |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| kubectl | functional-200200 kubectl --                                | functional-200200 | minikube1\jenkins | v1.33.1 | 28 May 24 13:49 PDT | 28 May 24 13:49 PDT |
	|         | --context functional-200200                                 |                   |                   |         |                     |                     |
	|         | get pods                                                    |                   |                   |         |                     |                     |
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/28 13:45:39
	Running on machine: minikube1
	Binary: Built with gc go1.22.3 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0528 13:45:39.131371    4516 out.go:291] Setting OutFile to fd 664 ...
	I0528 13:45:39.131964    4516 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 13:45:39.131964    4516 out.go:304] Setting ErrFile to fd 976...
	I0528 13:45:39.131964    4516 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 13:45:39.153977    4516 out.go:298] Setting JSON to false
	I0528 13:45:39.157071    4516 start.go:129] hostinfo: {"hostname":"minikube1","uptime":1903,"bootTime":1716927235,"procs":202,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4412 Build 19045.4412","kernelVersion":"10.0.19045.4412 Build 19045.4412","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0528 13:45:39.157071    4516 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0528 13:45:39.160572    4516 out.go:177] * [functional-200200] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4412 Build 19045.4412
	I0528 13:45:39.165234    4516 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0528 13:45:39.165234    4516 notify.go:220] Checking for updates...
	I0528 13:45:39.167847    4516 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0528 13:45:39.170862    4516 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0528 13:45:39.173292    4516 out.go:177]   - MINIKUBE_LOCATION=18966
	I0528 13:45:39.176190    4516 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0528 13:45:39.179519    4516 config.go:182] Loaded profile config "functional-200200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0528 13:45:39.179519    4516 driver.go:392] Setting default libvirt URI to qemu:///system
	I0528 13:45:44.598055    4516 out.go:177] * Using the hyperv driver based on existing profile
	I0528 13:45:44.601503    4516 start.go:297] selected driver: hyperv
	I0528 13:45:44.601503    4516 start.go:901] validating driver "hyperv" against &{Name:functional-200200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.30.1 ClusterName:functional-200200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.18.211.78 Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PV
ersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0528 13:45:44.602251    4516 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0528 13:45:44.651387    4516 cni.go:84] Creating CNI manager for ""
	I0528 13:45:44.651970    4516 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0528 13:45:44.652207    4516 start.go:340] cluster config:
	{Name:functional-200200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:functional-200200 Namespace:default APIServ
erHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.18.211.78 Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mount
Port:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0528 13:45:44.652277    4516 iso.go:125] acquiring lock: {Name:mk309f19768252c5551da1c999633aa2617813e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0528 13:45:44.655700    4516 out.go:177] * Starting "functional-200200" primary control-plane node in "functional-200200" cluster
	I0528 13:45:44.658326    4516 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0528 13:45:44.659280    4516 preload.go:147] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
	I0528 13:45:44.659280    4516 cache.go:56] Caching tarball of preloaded images
	I0528 13:45:44.659280    4516 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0528 13:45:44.659280    4516 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0528 13:45:44.659873    4516 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-200200\config.json ...
	I0528 13:45:44.662135    4516 start.go:360] acquireMachinesLock for functional-200200: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0528 13:45:44.662433    4516 start.go:364] duration metric: took 0s to acquireMachinesLock for "functional-200200"
	I0528 13:45:44.662550    4516 start.go:96] Skipping create...Using existing machine configuration
	I0528 13:45:44.662550    4516 fix.go:54] fixHost starting: 
	I0528 13:45:44.663121    4516 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-200200 ).state
	I0528 13:45:47.458376    4516 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 13:45:47.458376    4516 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:45:47.458376    4516 fix.go:112] recreateIfNeeded on functional-200200: state=Running err=<nil>
	W0528 13:45:47.458661    4516 fix.go:138] unexpected machine state, will restart: <nil>
	I0528 13:45:47.462183    4516 out.go:177] * Updating the running hyperv "functional-200200" VM ...
	I0528 13:45:47.464237    4516 machine.go:94] provisionDockerMachine start ...
	I0528 13:45:47.464824    4516 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-200200 ).state
	I0528 13:45:49.648258    4516 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 13:45:49.648258    4516 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:45:49.648397    4516 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-200200 ).networkadapters[0]).ipaddresses[0]
	I0528 13:45:52.250637    4516 main.go:141] libmachine: [stdout =====>] : 172.18.211.78
	
	I0528 13:45:52.251656    4516 main.go:141] libmachine: [stderr =====>] : 
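The two PowerShell invocations above are how the hyperv driver resolves a VM's state and its first NIC address before opening an SSH session. Below is a minimal Go sketch of the same two queries, assuming a Windows host with the Hyper-V PowerShell module; the VM name and PowerShell path are taken from the log, everything else is illustrative rather than minikube's actual implementation.

// hypervquery.go: query a Hyper-V VM's state and first IP via PowerShell,
// mirroring the Get-VM calls logged above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// psQuery runs one PowerShell expression non-interactively and returns its
// trimmed stdout, the same shape as the [stdout =====>] lines in the log.
func psQuery(command string) (string, error) {
	out, err := exec.Command(
		`C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
		"-NoProfile", "-NonInteractive", command,
	).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	vm := "functional-200200"
	state, err := psQuery(fmt.Sprintf(`( Hyper-V\Get-VM %s ).state`, vm))
	if err != nil {
		panic(err)
	}
	ip, err := psQuery(fmt.Sprintf(`(( Hyper-V\Get-VM %s ).networkadapters[0]).ipaddresses[0]`, vm))
	if err != nil {
		panic(err)
	}
	fmt.Printf("state=%s, ip=%s\n", state, ip)
}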
	I0528 13:45:52.258433    4516 main.go:141] libmachine: Using SSH client type: native
	I0528 13:45:52.259188    4516 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5ca4a0] 0x5cd080 <nil>  [] 0s} 172.18.211.78 22 <nil> <nil>}
	I0528 13:45:52.259188    4516 main.go:141] libmachine: About to run SSH command:
	hostname
	I0528 13:45:52.382492    4516 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-200200
	
	I0528 13:45:52.382492    4516 buildroot.go:166] provisioning hostname "functional-200200"
	I0528 13:45:52.382685    4516 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-200200 ).state
	I0528 13:45:54.560860    4516 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 13:45:54.560860    4516 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:45:54.561457    4516 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-200200 ).networkadapters[0]).ipaddresses[0]
	I0528 13:45:57.134829    4516 main.go:141] libmachine: [stdout =====>] : 172.18.211.78
	
	I0528 13:45:57.134829    4516 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:45:57.140471    4516 main.go:141] libmachine: Using SSH client type: native
	I0528 13:45:57.140819    4516 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5ca4a0] 0x5cd080 <nil>  [] 0s} 172.18.211.78 22 <nil> <nil>}
	I0528 13:45:57.140819    4516 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-200200 && echo "functional-200200" | sudo tee /etc/hostname
	I0528 13:45:57.297434    4516 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-200200
	
	I0528 13:45:57.297558    4516 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-200200 ).state
	I0528 13:45:59.501641    4516 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 13:45:59.501641    4516 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:45:59.501641    4516 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-200200 ).networkadapters[0]).ipaddresses[0]
	I0528 13:46:02.119072    4516 main.go:141] libmachine: [stdout =====>] : 172.18.211.78
	
	I0528 13:46:02.119072    4516 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:46:02.125246    4516 main.go:141] libmachine: Using SSH client type: native
	I0528 13:46:02.125782    4516 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5ca4a0] 0x5cd080 <nil>  [] 0s} 172.18.211.78 22 <nil> <nil>}
	I0528 13:46:02.125861    4516 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-200200' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-200200/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-200200' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0528 13:46:02.251674    4516 main.go:141] libmachine: SSH cmd err, output: <nil>: 
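The SSH command above keeps the /etc/hosts edit idempotent: it only touches the file when no line already maps the hostname, rewriting an existing 127.0.1.1 entry if present and appending one otherwise. The following Go sketch performs roughly the same check-then-edit locally rather than over SSH; the path and hostname are illustrative, and writing /etc/hosts still requires root on a real system.

// hostsfix.go: ensure /etc/hosts maps 127.0.1.1 to the node hostname,
// mirroring the grep/sed/tee logic in the provisioning step above.
package main

import (
	"fmt"
	"os"
	"regexp"
	"strings"
)

func ensureHostsEntry(path, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	// Roughly the grep -x check: some line already ends in "<ws><hostname>".
	if regexp.MustCompile(`(?m)\s`+regexp.QuoteMeta(hostname)+`$`).Match(data) {
		return nil
	}
	re := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	var out string
	if re.Match(data) {
		out = re.ReplaceAllString(string(data), "127.0.1.1 "+hostname)
	} else {
		out = strings.TrimRight(string(data), "\n") + "\n127.0.1.1 " + hostname + "\n"
	}
	return os.WriteFile(path, []byte(out), 0o644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "functional-200200"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}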
	I0528 13:46:02.251674    4516 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0528 13:46:02.251674    4516 buildroot.go:174] setting up certificates
	I0528 13:46:02.251674    4516 provision.go:84] configureAuth start
	I0528 13:46:02.251674    4516 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-200200 ).state
	I0528 13:46:04.431350    4516 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 13:46:04.431350    4516 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:46:04.431713    4516 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-200200 ).networkadapters[0]).ipaddresses[0]
	I0528 13:46:07.113010    4516 main.go:141] libmachine: [stdout =====>] : 172.18.211.78
	
	I0528 13:46:07.113010    4516 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:46:07.113010    4516 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-200200 ).state
	I0528 13:46:09.308591    4516 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 13:46:09.308826    4516 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:46:09.309007    4516 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-200200 ).networkadapters[0]).ipaddresses[0]
	I0528 13:46:11.917865    4516 main.go:141] libmachine: [stdout =====>] : 172.18.211.78
	
	I0528 13:46:11.917865    4516 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:46:11.917865    4516 provision.go:143] copyHostCerts
	I0528 13:46:11.918300    4516 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0528 13:46:11.918647    4516 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0528 13:46:11.918647    4516 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0528 13:46:11.918897    4516 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0528 13:46:11.920354    4516 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0528 13:46:11.920618    4516 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0528 13:46:11.920618    4516 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0528 13:46:11.920836    4516 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0528 13:46:11.921197    4516 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0528 13:46:11.922078    4516 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0528 13:46:11.922078    4516 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0528 13:46:11.922276    4516 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1675 bytes)
	I0528 13:46:11.922641    4516 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-200200 san=[127.0.0.1 172.18.211.78 functional-200200 localhost minikube]
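configureAuth then generates a Docker server certificate signed by the minikube CA, carrying the SANs listed in the log line above. The condensed Go sketch below shows that kind of CA-signed certificate generation; it assumes a PKCS#1 RSA CA key, copies the SANs and the 26280h expiry from the profile config, and uses illustrative file names rather than minikube's real paths.

// servercert.go: issue a TLS server certificate signed by a CA, with the
// DNS/IP SANs shown in the provisioning log.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func must(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	// Load the CA certificate and key (paths are illustrative).
	caPEM, err := os.ReadFile("ca.pem")
	must(err)
	caKeyPEM, err := os.ReadFile("ca-key.pem")
	must(err)
	caBlock, _ := pem.Decode(caPEM)
	caCert, err := x509.ParseCertificate(caBlock.Bytes)
	must(err)
	keyBlock, _ := pem.Decode(caKeyPEM)
	caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumes PKCS#1 RSA
	must(err)

	// Fresh key pair for the Docker server certificate.
	serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
	must(err)

	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.functional-200200"}},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the profile
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs copied from the log line above.
		DNSNames:    []string{"functional-200200", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.18.211.78")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	must(err)

	must(os.WriteFile("server.pem",
		pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0o644))
	must(os.WriteFile("server-key.pem",
		pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY",
			Bytes: x509.MarshalPKCS1PrivateKey(serverKey)}), 0o600))
}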
	I0528 13:46:12.367805    4516 provision.go:177] copyRemoteCerts
	I0528 13:46:12.379784    4516 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0528 13:46:12.379784    4516 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-200200 ).state
	I0528 13:46:14.561070    4516 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 13:46:14.561070    4516 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:46:14.561310    4516 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-200200 ).networkadapters[0]).ipaddresses[0]
	I0528 13:46:17.152392    4516 main.go:141] libmachine: [stdout =====>] : 172.18.211.78
	
	I0528 13:46:17.152635    4516 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:46:17.152843    4516 sshutil.go:53] new ssh client: &{IP:172.18.211.78 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-200200\id_rsa Username:docker}
	I0528 13:46:17.256882    4516 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.8769881s)
	I0528 13:46:17.256882    4516 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0528 13:46:17.257563    4516 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0528 13:46:17.312351    4516 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0528 13:46:17.312998    4516 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I0528 13:46:17.360115    4516 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0528 13:46:17.360115    4516 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0528 13:46:17.410210    4516 provision.go:87] duration metric: took 15.1585144s to configureAuth
	I0528 13:46:17.410274    4516 buildroot.go:189] setting minikube options for container-runtime
	I0528 13:46:17.410999    4516 config.go:182] Loaded profile config "functional-200200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0528 13:46:17.411054    4516 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-200200 ).state
	I0528 13:46:19.594421    4516 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 13:46:19.594421    4516 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:46:19.594963    4516 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-200200 ).networkadapters[0]).ipaddresses[0]
	I0528 13:46:22.209305    4516 main.go:141] libmachine: [stdout =====>] : 172.18.211.78
	
	I0528 13:46:22.210296    4516 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:46:22.216038    4516 main.go:141] libmachine: Using SSH client type: native
	I0528 13:46:22.216277    4516 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5ca4a0] 0x5cd080 <nil>  [] 0s} 172.18.211.78 22 <nil> <nil>}
	I0528 13:46:22.216277    4516 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0528 13:46:22.343865    4516 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0528 13:46:22.343980    4516 buildroot.go:70] root file system type: tmpfs
	I0528 13:46:22.344122    4516 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0528 13:46:22.344122    4516 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-200200 ).state
	I0528 13:46:24.528265    4516 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 13:46:24.528265    4516 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:46:24.529077    4516 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-200200 ).networkadapters[0]).ipaddresses[0]
	I0528 13:46:27.109837    4516 main.go:141] libmachine: [stdout =====>] : 172.18.211.78
	
	I0528 13:46:27.109946    4516 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:46:27.115963    4516 main.go:141] libmachine: Using SSH client type: native
	I0528 13:46:27.116482    4516 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5ca4a0] 0x5cd080 <nil>  [] 0s} 172.18.211.78 22 <nil> <nil>}
	I0528 13:46:27.116655    4516 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0528 13:46:27.266596    4516 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0528 13:46:27.267188    4516 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-200200 ).state
	I0528 13:46:29.474536    4516 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 13:46:29.474885    4516 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:46:29.474885    4516 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-200200 ).networkadapters[0]).ipaddresses[0]
	I0528 13:46:32.048749    4516 main.go:141] libmachine: [stdout =====>] : 172.18.211.78
	
	I0528 13:46:32.048801    4516 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:46:32.055092    4516 main.go:141] libmachine: Using SSH client type: native
	I0528 13:46:32.056285    4516 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5ca4a0] 0x5cd080 <nil>  [] 0s} 172.18.211.78 22 <nil> <nil>}
	I0528 13:46:32.056285    4516 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0528 13:46:32.201719    4516 main.go:141] libmachine: SSH cmd err, output: <nil>: 
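The command above installs the new unit only when it differs from the live one: diff -u exits non-zero on any difference (or when the live unit is missing), which triggers the move, daemon-reload, enable, and restart branch; identical files short-circuit the whole block. A small Go sketch that composes the same one-liner for an arbitrary unit follows; the helper name is illustrative.

// unitswap.go: build the "replace the unit only if it changed" command used
// in the provisioning step above.
package main

import "fmt"

func installUnitIfChangedCmd(unit string) string {
	path := "/lib/systemd/system/" + unit
	// diff exits 0 when the files match, so the braced branch runs only on a
	// change (or if the live unit does not exist yet).
	return fmt.Sprintf(
		"sudo diff -u %[1]s %[1]s.new || "+
			"{ sudo mv %[1]s.new %[1]s; sudo systemctl -f daemon-reload && "+
			"sudo systemctl -f enable %[2]s && sudo systemctl -f restart %[2]s; }",
		path, unit)
}

func main() {
	fmt.Println(installUnitIfChangedCmd("docker.service"))
}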
	I0528 13:46:32.201854    4516 machine.go:97] duration metric: took 44.736908s to provisionDockerMachine
	I0528 13:46:32.201854    4516 start.go:293] postStartSetup for "functional-200200" (driver="hyperv")
	I0528 13:46:32.201910    4516 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0528 13:46:32.215384    4516 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0528 13:46:32.215384    4516 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-200200 ).state
	I0528 13:46:34.383295    4516 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 13:46:34.384133    4516 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:46:34.384133    4516 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-200200 ).networkadapters[0]).ipaddresses[0]
	I0528 13:46:36.966206    4516 main.go:141] libmachine: [stdout =====>] : 172.18.211.78
	
	I0528 13:46:36.967208    4516 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:46:36.967550    4516 sshutil.go:53] new ssh client: &{IP:172.18.211.78 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-200200\id_rsa Username:docker}
	I0528 13:46:37.068460    4516 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8529974s)
	I0528 13:46:37.080619    4516 ssh_runner.go:195] Run: cat /etc/os-release
	I0528 13:46:37.087557    4516 command_runner.go:130] > NAME=Buildroot
	I0528 13:46:37.087557    4516 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0528 13:46:37.087557    4516 command_runner.go:130] > ID=buildroot
	I0528 13:46:37.087557    4516 command_runner.go:130] > VERSION_ID=2023.02.9
	I0528 13:46:37.087557    4516 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0528 13:46:37.087557    4516 info.go:137] Remote host: Buildroot 2023.02.9
	I0528 13:46:37.087557    4516 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0528 13:46:37.088325    4516 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0528 13:46:37.088803    4516 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\17802.pem -> 17802.pem in /etc/ssl/certs
	I0528 13:46:37.089507    4516 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\17802.pem -> /etc/ssl/certs/17802.pem
	I0528 13:46:37.090302    4516 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\test\nested\copy\1780\hosts -> hosts in /etc/test/nested/copy/1780
	I0528 13:46:37.090302    4516 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\test\nested\copy\1780\hosts -> /etc/test/nested/copy/1780/hosts
	I0528 13:46:37.101894    4516 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1780
	I0528 13:46:37.118902    4516 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\17802.pem --> /etc/ssl/certs/17802.pem (1708 bytes)
	I0528 13:46:37.170613    4516 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\test\nested\copy\1780\hosts --> /etc/test/nested/copy/1780/hosts (40 bytes)
	I0528 13:46:37.222822    4516 start.go:296] duration metric: took 5.0209606s for postStartSetup
	I0528 13:46:37.223001    4516 fix.go:56] duration metric: took 52.5603775s for fixHost
	I0528 13:46:37.223074    4516 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-200200 ).state
	I0528 13:46:39.366211    4516 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 13:46:39.366211    4516 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:46:39.366211    4516 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-200200 ).networkadapters[0]).ipaddresses[0]
	I0528 13:46:41.907063    4516 main.go:141] libmachine: [stdout =====>] : 172.18.211.78
	
	I0528 13:46:41.907131    4516 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:46:41.914467    4516 main.go:141] libmachine: Using SSH client type: native
	I0528 13:46:41.914659    4516 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5ca4a0] 0x5cd080 <nil>  [] 0s} 172.18.211.78 22 <nil> <nil>}
	I0528 13:46:41.914659    4516 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0528 13:46:42.041537    4516 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716929202.041951874
	
	I0528 13:46:42.041537    4516 fix.go:216] guest clock: 1716929202.041951874
	I0528 13:46:42.041537    4516 fix.go:229] Guest: 2024-05-28 13:46:42.041951874 -0700 PDT Remote: 2024-05-28 13:46:37.2230013 -0700 PDT m=+58.188157801 (delta=4.818950574s)
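fixHost compares the guest clock (seconds.nanoseconds reported over SSH) with the host clock and, because the roughly 4.8s delta here is significant, resets the guest with "sudo date -s @<seconds>" a few lines below. The short Go sketch that follows reproduces that comparison using the timestamps from the log; the 2-second threshold is an assumption for illustration, not minikube's actual cutoff.

// clockdelta.go: compute the guest/host clock delta logged above and decide
// whether the guest clock should be reset.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func main() {
	// Guest output of `date +%s.%N`, copied from the log.
	guestRaw := "1716929202.041951874"
	parts := strings.SplitN(guestRaw, ".", 2)
	sec, _ := strconv.ParseInt(parts[0], 10, 64)
	nsec, _ := strconv.ParseInt(parts[1], 10, 64)
	guest := time.Unix(sec, nsec)

	// Host-side timestamp, also copied from the log (PDT, UTC-7).
	host := time.Date(2024, 5, 28, 13, 46, 37, 223001300, time.FixedZone("PDT", -7*3600))

	delta := guest.Sub(host)
	fmt.Printf("guest=%s host=%s delta=%s\n", guest.UTC(), host.UTC(), delta)

	// Illustrative threshold: reset the guest clock if it drifts noticeably.
	if delta > 2*time.Second || delta < -2*time.Second {
		fmt.Printf("would run: sudo date -s @%d\n", guest.Unix())
	}
}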
	I0528 13:46:42.041537    4516 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-200200 ).state
	I0528 13:46:44.215886    4516 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 13:46:44.216647    4516 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:46:44.217207    4516 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-200200 ).networkadapters[0]).ipaddresses[0]
	I0528 13:46:46.772032    4516 main.go:141] libmachine: [stdout =====>] : 172.18.211.78
	
	I0528 13:46:46.772032    4516 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:46:46.778669    4516 main.go:141] libmachine: Using SSH client type: native
	I0528 13:46:46.778805    4516 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5ca4a0] 0x5cd080 <nil>  [] 0s} 172.18.211.78 22 <nil> <nil>}
	I0528 13:46:46.778805    4516 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1716929202
	I0528 13:46:46.918403    4516 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue May 28 20:46:42 UTC 2024
	
	I0528 13:46:46.918471    4516 fix.go:236] clock set: Tue May 28 20:46:42 UTC 2024
	 (err=<nil>)
	I0528 13:46:46.918471    4516 start.go:83] releasing machines lock for "functional-200200", held for 1m2.2559112s
	I0528 13:46:46.918725    4516 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-200200 ).state
	I0528 13:46:49.095967    4516 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 13:46:49.095967    4516 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:46:49.096161    4516 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-200200 ).networkadapters[0]).ipaddresses[0]
	I0528 13:46:51.678498    4516 main.go:141] libmachine: [stdout =====>] : 172.18.211.78
	
	I0528 13:46:51.678540    4516 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:46:51.682798    4516 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0528 13:46:51.682998    4516 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-200200 ).state
	I0528 13:46:51.693830    4516 ssh_runner.go:195] Run: cat /version.json
	I0528 13:46:51.693830    4516 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-200200 ).state
	I0528 13:46:53.896705    4516 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 13:46:53.896705    4516 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:46:53.897547    4516 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-200200 ).networkadapters[0]).ipaddresses[0]
	I0528 13:46:53.909668    4516 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 13:46:53.910203    4516 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:46:53.910278    4516 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-200200 ).networkadapters[0]).ipaddresses[0]
	I0528 13:46:56.620664    4516 main.go:141] libmachine: [stdout =====>] : 172.18.211.78
	
	I0528 13:46:56.621020    4516 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:46:56.621219    4516 sshutil.go:53] new ssh client: &{IP:172.18.211.78 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-200200\id_rsa Username:docker}
	I0528 13:46:56.648228    4516 main.go:141] libmachine: [stdout =====>] : 172.18.211.78
	
	I0528 13:46:56.648228    4516 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:46:56.648228    4516 sshutil.go:53] new ssh client: &{IP:172.18.211.78 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-200200\id_rsa Username:docker}
	I0528 13:46:56.776294    4516 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0528 13:46:56.776981    4516 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.0941205s)
	I0528 13:46:56.776981    4516 command_runner.go:130] > {"iso_version": "v1.33.1-1716398070-18934", "kicbase_version": "v0.0.44-1716228441-18934", "minikube_version": "v1.33.1", "commit": "7bc64cce06153f72c1bf9cbcf2114663ad5af3b7"}
	I0528 13:46:56.776981    4516 ssh_runner.go:235] Completed: cat /version.json: (5.0831444s)
	I0528 13:46:56.789878    4516 ssh_runner.go:195] Run: systemctl --version
	I0528 13:46:56.799215    4516 command_runner.go:130] > systemd 252 (252)
	I0528 13:46:56.799325    4516 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0528 13:46:56.814314    4516 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0528 13:46:56.822297    4516 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0528 13:46:56.822979    4516 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0528 13:46:56.834105    4516 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0528 13:46:56.853798    4516 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0528 13:46:56.853798    4516 start.go:494] detecting cgroup driver to use...
	I0528 13:46:56.853798    4516 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0528 13:46:56.888039    4516 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0528 13:46:56.901376    4516 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0528 13:46:56.931438    4516 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0528 13:46:56.952271    4516 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0528 13:46:56.965132    4516 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0528 13:46:56.999362    4516 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0528 13:46:57.031914    4516 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0528 13:46:57.066537    4516 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0528 13:46:57.099950    4516 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0528 13:46:57.133944    4516 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0528 13:46:57.165506    4516 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0528 13:46:57.197659    4516 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0528 13:46:57.228156    4516 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0528 13:46:57.246543    4516 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0528 13:46:57.258298    4516 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0528 13:46:57.288651    4516 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 13:46:57.577184    4516 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0528 13:46:57.614164    4516 start.go:494] detecting cgroup driver to use...
	I0528 13:46:57.627234    4516 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0528 13:46:57.650623    4516 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0528 13:46:57.650623    4516 command_runner.go:130] > [Unit]
	I0528 13:46:57.650623    4516 command_runner.go:130] > Description=Docker Application Container Engine
	I0528 13:46:57.650623    4516 command_runner.go:130] > Documentation=https://docs.docker.com
	I0528 13:46:57.650623    4516 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0528 13:46:57.650623    4516 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0528 13:46:57.650623    4516 command_runner.go:130] > StartLimitBurst=3
	I0528 13:46:57.650623    4516 command_runner.go:130] > StartLimitIntervalSec=60
	I0528 13:46:57.650623    4516 command_runner.go:130] > [Service]
	I0528 13:46:57.650623    4516 command_runner.go:130] > Type=notify
	I0528 13:46:57.650623    4516 command_runner.go:130] > Restart=on-failure
	I0528 13:46:57.650623    4516 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0528 13:46:57.650623    4516 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0528 13:46:57.650623    4516 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0528 13:46:57.650623    4516 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0528 13:46:57.650623    4516 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0528 13:46:57.650623    4516 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0528 13:46:57.650623    4516 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0528 13:46:57.650623    4516 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0528 13:46:57.650623    4516 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0528 13:46:57.650623    4516 command_runner.go:130] > ExecStart=
	I0528 13:46:57.650623    4516 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0528 13:46:57.650623    4516 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0528 13:46:57.651757    4516 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0528 13:46:57.651757    4516 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0528 13:46:57.651757    4516 command_runner.go:130] > LimitNOFILE=infinity
	I0528 13:46:57.651757    4516 command_runner.go:130] > LimitNPROC=infinity
	I0528 13:46:57.651757    4516 command_runner.go:130] > LimitCORE=infinity
	I0528 13:46:57.651815    4516 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0528 13:46:57.651815    4516 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0528 13:46:57.651815    4516 command_runner.go:130] > TasksMax=infinity
	I0528 13:46:57.651815    4516 command_runner.go:130] > TimeoutStartSec=0
	I0528 13:46:57.651870    4516 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0528 13:46:57.651870    4516 command_runner.go:130] > Delegate=yes
	I0528 13:46:57.651911    4516 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0528 13:46:57.651911    4516 command_runner.go:130] > KillMode=process
	I0528 13:46:57.651911    4516 command_runner.go:130] > [Install]
	I0528 13:46:57.651956    4516 command_runner.go:130] > WantedBy=multi-user.target
	I0528 13:46:57.664228    4516 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0528 13:46:57.706108    4516 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0528 13:46:57.751213    4516 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0528 13:46:57.787296    4516 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0528 13:46:57.817855    4516 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0528 13:46:57.856622    4516 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0528 13:46:57.873383    4516 ssh_runner.go:195] Run: which cri-dockerd
	I0528 13:46:57.880168    4516 command_runner.go:130] > /usr/bin/cri-dockerd
	I0528 13:46:57.893174    4516 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0528 13:46:57.916684    4516 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0528 13:46:57.962996    4516 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0528 13:46:58.245443    4516 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0528 13:46:58.497579    4516 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0528 13:46:58.497579    4516 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
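The 130-byte /etc/docker/daemon.json copied here is not printed in the log; given the "configuring docker to use cgroupfs" message just above, pinning the cgroup driver via exec-opts is the likely content, but that is an assumption. A hedged Go sketch that renders such a file:

// daemonjson.go: render a minimal Docker daemon.json that forces the cgroup
// driver. The exact keys minikube writes are an assumption; only the
// cgroupfs decision itself comes from the log.
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	cfg := map[string]interface{}{
		// exec-opts is Docker's standard knob for the cgroup driver.
		"exec-opts": []string{"native.cgroupdriver=cgroupfs"},
	}
	b, err := json.MarshalIndent(cfg, "", "  ")
	if err != nil {
		panic(err)
	}
	// Would be written to /etc/docker/daemon.json, then daemon-reload and
	// restart docker, as the following log lines show.
	fmt.Println(string(b))
}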
	I0528 13:46:58.546134    4516 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 13:46:58.815205    4516 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0528 13:47:11.629705    4516 ssh_runner.go:235] Completed: sudo systemctl restart docker: (12.8144814s)
	I0528 13:47:11.642056    4516 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0528 13:47:11.681031    4516 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0528 13:47:11.738683    4516 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0528 13:47:11.776907    4516 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0528 13:47:11.980903    4516 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0528 13:47:12.182002    4516 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 13:47:12.398742    4516 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0528 13:47:12.442463    4516 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0528 13:47:12.483966    4516 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 13:47:12.684437    4516 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0528 13:47:12.808286    4516 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0528 13:47:12.820913    4516 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0528 13:47:12.829622    4516 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0528 13:47:12.829622    4516 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0528 13:47:12.829622    4516 command_runner.go:130] > Device: 0,22	Inode: 1433        Links: 1
	I0528 13:47:12.829622    4516 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0528 13:47:12.829622    4516 command_runner.go:130] > Access: 2024-05-28 20:47:12.710774637 +0000
	I0528 13:47:12.829622    4516 command_runner.go:130] > Modify: 2024-05-28 20:47:12.710774637 +0000
	I0528 13:47:12.829622    4516 command_runner.go:130] > Change: 2024-05-28 20:47:12.713774672 +0000
	I0528 13:47:12.829622    4516 command_runner.go:130] >  Birth: -
	I0528 13:47:12.829622    4516 start.go:562] Will wait 60s for crictl version
	I0528 13:47:12.843296    4516 ssh_runner.go:195] Run: which crictl
	I0528 13:47:12.849550    4516 command_runner.go:130] > /usr/bin/crictl
	I0528 13:47:12.863313    4516 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0528 13:47:12.915290    4516 command_runner.go:130] > Version:  0.1.0
	I0528 13:47:12.915435    4516 command_runner.go:130] > RuntimeName:  docker
	I0528 13:47:12.915435    4516 command_runner.go:130] > RuntimeVersion:  26.0.2
	I0528 13:47:12.915478    4516 command_runner.go:130] > RuntimeApiVersion:  v1
	I0528 13:47:12.915506    4516 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0528 13:47:12.926386    4516 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0528 13:47:12.959864    4516 command_runner.go:130] > 26.0.2
	I0528 13:47:12.971205    4516 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0528 13:47:13.002327    4516 command_runner.go:130] > 26.0.2
	I0528 13:47:13.008094    4516 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.0.2 ...
	I0528 13:47:13.008314    4516 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0528 13:47:13.012334    4516 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0528 13:47:13.012334    4516 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0528 13:47:13.012334    4516 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0528 13:47:13.012334    4516 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:91:32:cc Flags:up|broadcast|multicast|running}
	I0528 13:47:13.015090    4516 ip.go:210] interface addr: fe80::6a65:d82d:f435:d71f/64
	I0528 13:47:13.015090    4516 ip.go:210] interface addr: 172.18.208.1/20
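getIPForInterface walks the host's network adapters, skips those whose names do not start with "vEthernet (Default Switch)", and takes the first IPv4 address of the match; that address is what the VM later resolves as host.minikube.internal. A minimal Go sketch of the same lookup (the function name is illustrative):

// ifaceip.go: find the IPv4 address of the first host interface whose name
// starts with the given prefix, as in the ip.go lines above.
package main

import (
	"fmt"
	"net"
	"strings"
)

func ipForInterfacePrefix(prefix string) (net.IP, error) {
	ifaces, err := net.Interfaces()
	if err != nil {
		return nil, err
	}
	for _, ifc := range ifaces {
		if !strings.HasPrefix(ifc.Name, prefix) {
			continue // e.g. "Ethernet 2", "Loopback Pseudo-Interface 1"
		}
		addrs, err := ifc.Addrs()
		if err != nil {
			return nil, err
		}
		for _, a := range addrs {
			if ipnet, ok := a.(*net.IPNet); ok && ipnet.IP.To4() != nil {
				return ipnet.IP, nil // first IPv4 address, e.g. 172.18.208.1
			}
		}
	}
	return nil, fmt.Errorf("no interface matching %q with an IPv4 address", prefix)
}

func main() {
	ip, err := ipForInterfacePrefix("vEthernet (Default Switch)")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("host.minikube.internal ->", ip)
}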
	I0528 13:47:13.027337    4516 ssh_runner.go:195] Run: grep 172.18.208.1	host.minikube.internal$ /etc/hosts
	I0528 13:47:13.034192    4516 command_runner.go:130] > 172.18.208.1	host.minikube.internal
	I0528 13:47:13.034676    4516 kubeadm.go:877] updating cluster {Name:functional-200200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:
v1.30.1 ClusterName:functional-200200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.18.211.78 Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L M
ountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0528 13:47:13.034816    4516 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0528 13:47:13.043774    4516 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0528 13:47:13.071359    4516 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.1
	I0528 13:47:13.071396    4516 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.1
	I0528 13:47:13.071457    4516 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.1
	I0528 13:47:13.071457    4516 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.1
	I0528 13:47:13.071457    4516 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0528 13:47:13.071457    4516 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0528 13:47:13.071457    4516 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0528 13:47:13.071457    4516 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0528 13:47:13.071977    4516 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0528 13:47:13.071977    4516 docker.go:615] Images already preloaded, skipping extraction
	I0528 13:47:13.082035    4516 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0528 13:47:13.108105    4516 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.1
	I0528 13:47:13.108105    4516 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.1
	I0528 13:47:13.108105    4516 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.1
	I0528 13:47:13.108105    4516 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.1
	I0528 13:47:13.108105    4516 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0528 13:47:13.108105    4516 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0528 13:47:13.108105    4516 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0528 13:47:13.113008    4516 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0528 13:47:13.113008    4516 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0528 13:47:13.113105    4516 cache_images.go:84] Images are preloaded, skipping loading
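"Images are preloaded, skipping loading" follows from comparing the docker images listing against the image set required for Kubernetes v1.30.1. The Go sketch below shows that kind of check, with the required list copied from the log output above; the helper structure is illustrative rather than minikube's cache_images implementation.

// imagecheck.go: verify every required image is already present in the
// local Docker daemon, as decided in the log line above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	required := []string{
		"registry.k8s.io/kube-apiserver:v1.30.1",
		"registry.k8s.io/kube-scheduler:v1.30.1",
		"registry.k8s.io/kube-controller-manager:v1.30.1",
		"registry.k8s.io/kube-proxy:v1.30.1",
		"registry.k8s.io/etcd:3.5.12-0",
		"registry.k8s.io/coredns/coredns:v1.11.1",
		"registry.k8s.io/pause:3.9",
		"gcr.io/k8s-minikube/storage-provisioner:v5",
	}
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		fmt.Println("docker not reachable:", err)
		return
	}
	have := map[string]bool{}
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		have[line] = true
	}
	var missing []string
	for _, img := range required {
		if !have[img] {
			missing = append(missing, img)
		}
	}
	if len(missing) == 0 {
		fmt.Println("images are preloaded, skipping loading")
	} else {
		fmt.Println("need to load:", missing)
	}
}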
	I0528 13:47:13.113105    4516 kubeadm.go:928] updating node { 172.18.211.78 8441 v1.30.1 docker true true} ...
	I0528 13:47:13.113333    4516 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-200200 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.18.211.78
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:functional-200200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0528 13:47:13.123027    4516 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0528 13:47:13.163326    4516 command_runner.go:130] > cgroupfs
	I0528 13:47:13.163619    4516 cni.go:84] Creating CNI manager for ""
	I0528 13:47:13.163651    4516 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0528 13:47:13.163712    4516 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0528 13:47:13.163839    4516 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.18.211.78 APIServerPort:8441 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-200200 NodeName:functional-200200 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.18.211.78"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.18.211.78 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0528 13:47:13.164139    4516 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.18.211.78
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-200200"
	  kubeletExtraArgs:
	    node-ip: 172.18.211.78
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.18.211.78"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0528 13:47:13.177654    4516 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0528 13:47:13.196856    4516 command_runner.go:130] > kubeadm
	I0528 13:47:13.196856    4516 command_runner.go:130] > kubectl
	I0528 13:47:13.196856    4516 command_runner.go:130] > kubelet
	I0528 13:47:13.196856    4516 binaries.go:44] Found k8s binaries, skipping transfer
	I0528 13:47:13.210068    4516 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0528 13:47:13.228346    4516 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0528 13:47:13.265121    4516 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0528 13:47:13.300782    4516 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0528 13:47:13.346649    4516 ssh_runner.go:195] Run: grep 172.18.211.78	control-plane.minikube.internal$ /etc/hosts
	I0528 13:47:13.354305    4516 command_runner.go:130] > 172.18.211.78	control-plane.minikube.internal
	I0528 13:47:13.366197    4516 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 13:47:13.580202    4516 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0528 13:47:13.607744    4516 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-200200 for IP: 172.18.211.78
	I0528 13:47:13.607744    4516 certs.go:194] generating shared ca certs ...
	I0528 13:47:13.607744    4516 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 13:47:13.608681    4516 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0528 13:47:13.608681    4516 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0528 13:47:13.608681    4516 certs.go:256] generating profile certs ...
	I0528 13:47:13.610328    4516 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-200200\client.key
	I0528 13:47:13.610667    4516 certs.go:359] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-200200\apiserver.key.35063046
	I0528 13:47:13.610993    4516 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-200200\proxy-client.key
	I0528 13:47:13.611106    4516 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0528 13:47:13.611281    4516 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0528 13:47:13.611455    4516 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0528 13:47:13.611661    4516 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0528 13:47:13.611823    4516 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-200200\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0528 13:47:13.611994    4516 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-200200\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0528 13:47:13.612171    4516 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-200200\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0528 13:47:13.612343    4516 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-200200\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0528 13:47:13.612958    4516 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\1780.pem (1338 bytes)
	W0528 13:47:13.613289    4516 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\1780_empty.pem, impossibly tiny 0 bytes
	I0528 13:47:13.613390    4516 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0528 13:47:13.613729    4516 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0528 13:47:13.614004    4516 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0528 13:47:13.614332    4516 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0528 13:47:13.614332    4516 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\17802.pem (1708 bytes)
	I0528 13:47:13.614915    4516 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\17802.pem -> /usr/share/ca-certificates/17802.pem
	I0528 13:47:13.615168    4516 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0528 13:47:13.615168    4516 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\1780.pem -> /usr/share/ca-certificates/1780.pem
	I0528 13:47:13.615913    4516 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0528 13:47:13.664331    4516 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0528 13:47:13.712316    4516 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0528 13:47:13.769989    4516 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0528 13:47:13.854509    4516 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-200200\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0528 13:47:13.926146    4516 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-200200\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0528 13:47:13.983752    4516 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-200200\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0528 13:47:14.040615    4516 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-200200\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0528 13:47:14.102071    4516 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\17802.pem --> /usr/share/ca-certificates/17802.pem (1708 bytes)
	I0528 13:47:14.157259    4516 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0528 13:47:14.209964    4516 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\1780.pem --> /usr/share/ca-certificates/1780.pem (1338 bytes)
	I0528 13:47:14.278549    4516 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0528 13:47:14.330142    4516 ssh_runner.go:195] Run: openssl version
	I0528 13:47:14.338269    4516 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0528 13:47:14.352839    4516 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0528 13:47:14.386983    4516 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0528 13:47:14.395055    4516 command_runner.go:130] > -rw-r--r-- 1 root root 1111 May 28 20:25 /usr/share/ca-certificates/minikubeCA.pem
	I0528 13:47:14.395383    4516 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 28 20:25 /usr/share/ca-certificates/minikubeCA.pem
	I0528 13:47:14.408222    4516 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0528 13:47:14.416366    4516 command_runner.go:130] > b5213941
	I0528 13:47:14.429669    4516 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0528 13:47:14.461599    4516 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1780.pem && ln -fs /usr/share/ca-certificates/1780.pem /etc/ssl/certs/1780.pem"
	I0528 13:47:14.495574    4516 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1780.pem
	I0528 13:47:14.502265    4516 command_runner.go:130] > -rw-r--r-- 1 root root 1338 May 28 20:41 /usr/share/ca-certificates/1780.pem
	I0528 13:47:14.502265    4516 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 28 20:41 /usr/share/ca-certificates/1780.pem
	I0528 13:47:14.515952    4516 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1780.pem
	I0528 13:47:14.527305    4516 command_runner.go:130] > 51391683
	I0528 13:47:14.540348    4516 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1780.pem /etc/ssl/certs/51391683.0"
	I0528 13:47:14.580981    4516 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17802.pem && ln -fs /usr/share/ca-certificates/17802.pem /etc/ssl/certs/17802.pem"
	I0528 13:47:14.616120    4516 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17802.pem
	I0528 13:47:14.626309    4516 command_runner.go:130] > -rw-r--r-- 1 root root 1708 May 28 20:41 /usr/share/ca-certificates/17802.pem
	I0528 13:47:14.626789    4516 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 28 20:41 /usr/share/ca-certificates/17802.pem
	I0528 13:47:14.640075    4516 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17802.pem
	I0528 13:47:14.657361    4516 command_runner.go:130] > 3ec20f2e
	I0528 13:47:14.671812    4516 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/17802.pem /etc/ssl/certs/3ec20f2e.0"
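The three cert blocks above each compute an OpenSSL subject hash (`openssl x509 -hash -noout`) and then symlink the certificate to /etc/ssl/certs/<hash>.0 so OpenSSL-based clients can resolve it by hash. A minimal Go sketch of those two steps (not minikube's own code; the paths are illustrative and openssl is assumed to be on PATH):

// Sketch: compute the OpenSSL subject hash of one CA file, then link <hash>.0 to it.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func linkCACert(certPath, certsDir string) error {
	// Equivalent of: openssl x509 -hash -noout -in <certPath>
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"

	// Equivalent of: ln -fs <certPath> <certsDir>/<hash>.0
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // emulate -f: replace an existing link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}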
	I0528 13:47:14.709957    4516 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0528 13:47:14.721129    4516 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0528 13:47:14.721225    4516 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0528 13:47:14.721225    4516 command_runner.go:130] > Device: 8,1	Inode: 8384338     Links: 1
	I0528 13:47:14.721225    4516 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0528 13:47:14.721271    4516 command_runner.go:130] > Access: 2024-05-28 20:44:29.644072893 +0000
	I0528 13:47:14.721271    4516 command_runner.go:130] > Modify: 2024-05-28 20:44:29.644072893 +0000
	I0528 13:47:14.721313    4516 command_runner.go:130] > Change: 2024-05-28 20:44:29.644072893 +0000
	I0528 13:47:14.721331    4516 command_runner.go:130] >  Birth: 2024-05-28 20:44:29.644072893 +0000
	I0528 13:47:14.734037    4516 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0528 13:47:14.748068    4516 command_runner.go:130] > Certificate will not expire
	I0528 13:47:14.761024    4516 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0528 13:47:14.773156    4516 command_runner.go:130] > Certificate will not expire
	I0528 13:47:14.785531    4516 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0528 13:47:14.806032    4516 command_runner.go:130] > Certificate will not expire
	I0528 13:47:14.820418    4516 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0528 13:47:14.832653    4516 command_runner.go:130] > Certificate will not expire
	I0528 13:47:14.845700    4516 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0528 13:47:14.863232    4516 command_runner.go:130] > Certificate will not expire
	I0528 13:47:14.876598    4516 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0528 13:47:14.889576    4516 command_runner.go:130] > Certificate will not expire
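Each `-checkend 86400` call above simply asks whether the certificate expires within the next 24 hours. A rough equivalent using Go's crypto/x509, with an illustrative cert path, would be:

// Sketch of what `openssl x509 -noout -in <cert> -checkend 86400` decides:
// does the certificate expire within the next 86400 seconds?
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(certPath string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(certPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM data", certPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	// Illustrative path; the log checks several certs under /var/lib/minikube/certs.
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if soon {
		fmt.Println("Certificate will expire")
	} else {
		fmt.Println("Certificate will not expire")
	}
}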
	I0528 13:47:14.890341    4516 kubeadm.go:391] StartCluster: {Name:functional-200200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
30.1 ClusterName:functional-200200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.18.211.78 Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L Moun
tGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0528 13:47:14.900182    4516 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0528 13:47:14.964591    4516 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0528 13:47:14.990167    4516 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0528 13:47:14.990224    4516 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0528 13:47:14.990224    4516 command_runner.go:130] > /var/lib/minikube/etcd:
	I0528 13:47:14.990224    4516 command_runner.go:130] > member
	W0528 13:47:14.991365    4516 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0528 13:47:14.992434    4516 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0528 13:47:14.992519    4516 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0528 13:47:15.005135    4516 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0528 13:47:15.030150    4516 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0528 13:47:15.031390    4516 kubeconfig.go:125] found "functional-200200" server: "https://172.18.211.78:8441"
	I0528 13:47:15.032520    4516 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0528 13:47:15.032829    4516 kapi.go:59] client config for functional-200200: &rest.Config{Host:"https://172.18.211.78:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\functional-200200\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\functional-200200\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil),
CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1a6b5a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0528 13:47:15.034987    4516 cert_rotation.go:137] Starting client certificate rotation controller
	I0528 13:47:15.051780    4516 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0528 13:47:15.071046    4516 kubeadm.go:624] The running cluster does not require reconfiguration: 172.18.211.78
	I0528 13:47:15.071144    4516 kubeadm.go:1154] stopping kube-system containers ...
	I0528 13:47:15.080399    4516 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0528 13:47:15.141013    4516 command_runner.go:130] > 0083931a2039
	I0528 13:47:15.141289    4516 command_runner.go:130] > 39a267d9dee8
	I0528 13:47:15.141289    4516 command_runner.go:130] > 096de421cc93
	I0528 13:47:15.141289    4516 command_runner.go:130] > 421964bf5599
	I0528 13:47:15.141366    4516 command_runner.go:130] > 3c583f4b8962
	I0528 13:47:15.141366    4516 command_runner.go:130] > 788bf2870901
	I0528 13:47:15.141366    4516 command_runner.go:130] > a328e358e7d8
	I0528 13:47:15.141366    4516 command_runner.go:130] > 684e61568c39
	I0528 13:47:15.141366    4516 command_runner.go:130] > a6c2243527c9
	I0528 13:47:15.141366    4516 command_runner.go:130] > 9392d39bf7a1
	I0528 13:47:15.141428    4516 command_runner.go:130] > 097ed2bf877d
	I0528 13:47:15.141428    4516 command_runner.go:130] > aeb3b1fa0b4c
	I0528 13:47:15.141428    4516 command_runner.go:130] > 5cfea97ba889
	I0528 13:47:15.141428    4516 command_runner.go:130] > 6e66c448cf75
	I0528 13:47:15.141463    4516 command_runner.go:130] > 2fd08f1f3215
	I0528 13:47:15.141463    4516 command_runner.go:130] > b7c0ef99a550
	I0528 13:47:15.141463    4516 command_runner.go:130] > 799525f0c9d1
	I0528 13:47:15.141463    4516 command_runner.go:130] > 770e33bfd551
	I0528 13:47:15.141463    4516 command_runner.go:130] > be193c47e115
	I0528 13:47:15.141515    4516 command_runner.go:130] > 02cf6670d3cb
	I0528 13:47:15.141515    4516 command_runner.go:130] > a118279c1faa
	I0528 13:47:15.141515    4516 command_runner.go:130] > 2fe159cff253
	I0528 13:47:15.141557    4516 command_runner.go:130] > 1cc9a61a1710
	I0528 13:47:15.141557    4516 command_runner.go:130] > 32e4795f3077
	I0528 13:47:15.141693    4516 docker.go:483] Stopping containers: [0083931a2039 39a267d9dee8 096de421cc93 421964bf5599 3c583f4b8962 788bf2870901 a328e358e7d8 684e61568c39 a6c2243527c9 9392d39bf7a1 097ed2bf877d aeb3b1fa0b4c 5cfea97ba889 6e66c448cf75 2fd08f1f3215 b7c0ef99a550 799525f0c9d1 770e33bfd551 be193c47e115 02cf6670d3cb a118279c1faa 2fe159cff253 1cc9a61a1710 32e4795f3077]
	I0528 13:47:15.152579    4516 ssh_runner.go:195] Run: docker stop 0083931a2039 39a267d9dee8 096de421cc93 421964bf5599 3c583f4b8962 788bf2870901 a328e358e7d8 684e61568c39 a6c2243527c9 9392d39bf7a1 097ed2bf877d aeb3b1fa0b4c 5cfea97ba889 6e66c448cf75 2fd08f1f3215 b7c0ef99a550 799525f0c9d1 770e33bfd551 be193c47e115 02cf6670d3cb a118279c1faa 2fe159cff253 1cc9a61a1710 32e4795f3077
	I0528 13:47:15.971710    4516 command_runner.go:130] > 0083931a2039
	I0528 13:47:15.971710    4516 command_runner.go:130] > 39a267d9dee8
	I0528 13:47:15.971710    4516 command_runner.go:130] > 096de421cc93
	I0528 13:47:15.971710    4516 command_runner.go:130] > 421964bf5599
	I0528 13:47:15.972108    4516 command_runner.go:130] > 3c583f4b8962
	I0528 13:47:15.972108    4516 command_runner.go:130] > 788bf2870901
	I0528 13:47:15.972108    4516 command_runner.go:130] > a328e358e7d8
	I0528 13:47:15.972215    4516 command_runner.go:130] > 684e61568c39
	I0528 13:47:15.972215    4516 command_runner.go:130] > a6c2243527c9
	I0528 13:47:15.972215    4516 command_runner.go:130] > 9392d39bf7a1
	I0528 13:47:15.972215    4516 command_runner.go:130] > 097ed2bf877d
	I0528 13:47:15.972270    4516 command_runner.go:130] > aeb3b1fa0b4c
	I0528 13:47:15.972270    4516 command_runner.go:130] > 5cfea97ba889
	I0528 13:47:15.972270    4516 command_runner.go:130] > 6e66c448cf75
	I0528 13:47:15.972270    4516 command_runner.go:130] > 2fd08f1f3215
	I0528 13:47:15.972270    4516 command_runner.go:130] > b7c0ef99a550
	I0528 13:47:15.972307    4516 command_runner.go:130] > 799525f0c9d1
	I0528 13:47:15.972366    4516 command_runner.go:130] > 770e33bfd551
	I0528 13:47:15.972366    4516 command_runner.go:130] > be193c47e115
	I0528 13:47:15.972366    4516 command_runner.go:130] > 02cf6670d3cb
	I0528 13:47:15.972366    4516 command_runner.go:130] > a118279c1faa
	I0528 13:47:15.972451    4516 command_runner.go:130] > 2fe159cff253
	I0528 13:47:15.972509    4516 command_runner.go:130] > 1cc9a61a1710
	I0528 13:47:15.972509    4516 command_runner.go:130] > 32e4795f3077
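The block above lists every container matching the k8s_*_(kube-system)_ naming pattern and stops them before the control plane is regenerated. A hedged sketch of the same two docker invocations, shelling out to the docker CLI directly rather than going through minikube's ssh_runner:

// Sketch: find kube-system pod containers by name pattern, then stop them.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_.*_(kube-system)_", "--format", "{{.ID}}").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	ids := strings.Fields(string(out))
	if len(ids) == 0 {
		return // nothing to stop
	}
	// docker stop <id> <id> ...
	stop := exec.Command("docker", append([]string{"stop"}, ids...)...)
	stop.Stdout, stop.Stderr = os.Stdout, os.Stderr
	if err := stop.Run(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}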
	I0528 13:47:15.986245    4516 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0528 13:47:16.061977    4516 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0528 13:47:16.082658    4516 command_runner.go:130] > -rw------- 1 root root 5647 May 28 20:44 /etc/kubernetes/admin.conf
	I0528 13:47:16.082658    4516 command_runner.go:130] > -rw------- 1 root root 5653 May 28 20:44 /etc/kubernetes/controller-manager.conf
	I0528 13:47:16.082658    4516 command_runner.go:130] > -rw------- 1 root root 2007 May 28 20:44 /etc/kubernetes/kubelet.conf
	I0528 13:47:16.082658    4516 command_runner.go:130] > -rw------- 1 root root 5601 May 28 20:44 /etc/kubernetes/scheduler.conf
	I0528 13:47:16.082658    4516 kubeadm.go:156] found existing configuration files:
	-rw------- 1 root root 5647 May 28 20:44 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5653 May 28 20:44 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 May 28 20:44 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5601 May 28 20:44 /etc/kubernetes/scheduler.conf
	
	I0528 13:47:16.095755    4516 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I0528 13:47:16.118505    4516 command_runner.go:130] >     server: https://control-plane.minikube.internal:8441
	I0528 13:47:16.132217    4516 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I0528 13:47:16.152905    4516 command_runner.go:130] >     server: https://control-plane.minikube.internal:8441
	I0528 13:47:16.166024    4516 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I0528 13:47:16.185580    4516 kubeadm.go:162] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0528 13:47:16.199232    4516 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0528 13:47:16.228152    4516 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I0528 13:47:16.244110    4516 kubeadm.go:162] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0528 13:47:16.257373    4516 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0528 13:47:16.289522    4516 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0528 13:47:16.306529    4516 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0528 13:47:16.402338    4516 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0528 13:47:16.402827    4516 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0528 13:47:16.402827    4516 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0528 13:47:16.402827    4516 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0528 13:47:16.402827    4516 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0528 13:47:16.402827    4516 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0528 13:47:16.402827    4516 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0528 13:47:16.402827    4516 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0528 13:47:16.402827    4516 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0528 13:47:16.402974    4516 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0528 13:47:16.402974    4516 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0528 13:47:16.402974    4516 command_runner.go:130] > [certs] Using the existing "sa" key
	I0528 13:47:16.402974    4516 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0528 13:47:17.403082    4516 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0528 13:47:17.403209    4516 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
	I0528 13:47:17.403209    4516 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/super-admin.conf"
	I0528 13:47:17.403209    4516 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/kubelet.conf"
	I0528 13:47:17.403209    4516 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0528 13:47:17.403301    4516 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0528 13:47:17.403301    4516 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.0002112s)
	I0528 13:47:17.403374    4516 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0528 13:47:17.728084    4516 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0528 13:47:17.728856    4516 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0528 13:47:17.728856    4516 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0528 13:47:17.728904    4516 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0528 13:47:17.821743    4516 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0528 13:47:17.821801    4516 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0528 13:47:17.821876    4516 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0528 13:47:17.821876    4516 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0528 13:47:17.821910    4516 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0528 13:47:17.949059    4516 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0528 13:47:17.949890    4516 api_server.go:52] waiting for apiserver process to appear ...
	I0528 13:47:17.963175    4516 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 13:47:18.476258    4516 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 13:47:18.966071    4516 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 13:47:19.478696    4516 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 13:47:19.968178    4516 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 13:47:19.995273    4516 command_runner.go:130] > 5633
	I0528 13:47:19.995848    4516 api_server.go:72] duration metric: took 2.0459831s to wait for apiserver process to appear ...
	I0528 13:47:19.995848    4516 api_server.go:88] waiting for apiserver healthz status ...
	I0528 13:47:19.995848    4516 api_server.go:253] Checking apiserver healthz at https://172.18.211.78:8441/healthz ...
	I0528 13:47:22.532981    4516 api_server.go:279] https://172.18.211.78:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0528 13:47:22.533526    4516 api_server.go:103] status: https://172.18.211.78:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0528 13:47:22.533526    4516 api_server.go:253] Checking apiserver healthz at https://172.18.211.78:8441/healthz ...
	I0528 13:47:22.604067    4516 api_server.go:279] https://172.18.211.78:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0528 13:47:22.604067    4516 api_server.go:103] status: https://172.18.211.78:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0528 13:47:23.004525    4516 api_server.go:253] Checking apiserver healthz at https://172.18.211.78:8441/healthz ...
	I0528 13:47:23.018466    4516 api_server.go:279] https://172.18.211.78:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0528 13:47:23.018466    4516 api_server.go:103] status: https://172.18.211.78:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0528 13:47:23.495979    4516 api_server.go:253] Checking apiserver healthz at https://172.18.211.78:8441/healthz ...
	I0528 13:47:23.526478    4516 api_server.go:279] https://172.18.211.78:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0528 13:47:23.526478    4516 api_server.go:103] status: https://172.18.211.78:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0528 13:47:24.006970    4516 api_server.go:253] Checking apiserver healthz at https://172.18.211.78:8441/healthz ...
	I0528 13:47:24.018857    4516 api_server.go:279] https://172.18.211.78:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0528 13:47:24.018857    4516 api_server.go:103] status: https://172.18.211.78:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0528 13:47:24.499748    4516 api_server.go:253] Checking apiserver healthz at https://172.18.211.78:8441/healthz ...
	I0528 13:47:24.509787    4516 api_server.go:279] https://172.18.211.78:8441/healthz returned 200:
	ok
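The retry loop above polls /healthz roughly twice a second, logging the 403 and 500 bodies and retrying until the endpoint finally answers 200. A simplified sketch of such a poll is below; it skips TLS verification for brevity, whereas the real client authenticates with the cluster CA and client certificates:

// Sketch: poll an HTTPS healthz endpoint until it returns 200 OK or a deadline passes.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver healthz not ready after %s", timeout)
}

func main() {
	if err := waitForHealthz("https://172.18.211.78:8441/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}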
	I0528 13:47:24.510776    4516 round_trippers.go:463] GET https://172.18.211.78:8441/version
	I0528 13:47:24.510776    4516 round_trippers.go:469] Request Headers:
	I0528 13:47:24.510776    4516 round_trippers.go:473]     Accept: application/json, */*
	I0528 13:47:24.510776    4516 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 13:47:24.524594    4516 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0528 13:47:24.524594    4516 round_trippers.go:577] Response Headers:
	I0528 13:47:24.524594    4516 round_trippers.go:580]     Content-Type: application/json
	I0528 13:47:24.525591    4516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6b7b9069-82e0-4cfd-9d0b-960d5eab4b37
	I0528 13:47:24.525611    4516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55d3383c-44bc-4b11-ad23-e49bd4d694f6
	I0528 13:47:24.525611    4516 round_trippers.go:580]     Content-Length: 263
	I0528 13:47:24.525611    4516 round_trippers.go:580]     Date: Tue, 28 May 2024 20:47:24 GMT
	I0528 13:47:24.525611    4516 round_trippers.go:580]     Audit-Id: 9e87fb73-4354-422c-88ed-461e82f9a8fa
	I0528 13:47:24.525611    4516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 13:47:24.525694    4516 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.1",
	  "gitCommit": "6911225c3f747e1cd9d109c305436d08b668f086",
	  "gitTreeState": "clean",
	  "buildDate": "2024-05-14T10:42:02Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0528 13:47:24.525885    4516 api_server.go:141] control plane version: v1.30.1
	I0528 13:47:24.525902    4516 api_server.go:131] duration metric: took 4.5300464s to wait for apiserver health ...
	I0528 13:47:24.525902    4516 cni.go:84] Creating CNI manager for ""
	I0528 13:47:24.525955    4516 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0528 13:47:24.528522    4516 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0528 13:47:24.543068    4516 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0528 13:47:24.560220    4516 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
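The scp above drops a small bridge conflist at /etc/cni/net.d/1-k8s.conflist. The exact file minikube generates is not shown in the log; the sketch below writes an illustrative bridge+portmap conflist using the pod CIDR (10.244.0.0/16) from the kubeadm options earlier, so the fields are assumptions, not minikube's template:

// Illustrative only: write a bridge+portmap CNI conflist to /etc/cni/net.d/1-k8s.conflist.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

func main() {
	conflist := map[string]any{
		"cniVersion": "0.3.1",
		"name":       "bridge",
		"plugins": []map[string]any{
			{
				"type":        "bridge",
				"bridge":      "bridge",
				"ipMasq":      true,
				"hairpinMode": true,
				"ipam": map[string]any{
					"type":   "host-local",
					"subnet": "10.244.0.0/16", // pod CIDR from the kubeadm options above
				},
			},
			{
				"type":         "portmap",
				"capabilities": map[string]bool{"portMappings": true},
			},
		},
	}
	data, err := json.MarshalIndent(conflist, "", "  ")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", data, 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}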
	I0528 13:47:24.598267    4516 system_pods.go:43] waiting for kube-system pods to appear ...
	I0528 13:47:24.598647    4516 round_trippers.go:463] GET https://172.18.211.78:8441/api/v1/namespaces/kube-system/pods
	I0528 13:47:24.598647    4516 round_trippers.go:469] Request Headers:
	I0528 13:47:24.598647    4516 round_trippers.go:473]     Accept: application/json, */*
	I0528 13:47:24.598705    4516 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 13:47:24.617100    4516 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0528 13:47:24.617868    4516 round_trippers.go:577] Response Headers:
	I0528 13:47:24.617868    4516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6b7b9069-82e0-4cfd-9d0b-960d5eab4b37
	I0528 13:47:24.617950    4516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55d3383c-44bc-4b11-ad23-e49bd4d694f6
	I0528 13:47:24.617950    4516 round_trippers.go:580]     Date: Tue, 28 May 2024 20:47:24 GMT
	I0528 13:47:24.617950    4516 round_trippers.go:580]     Audit-Id: 43da415b-1d5d-4e90-937f-1ecc7e82bd21
	I0528 13:47:24.617950    4516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 13:47:24.617950    4516 round_trippers.go:580]     Content-Type: application/json
	I0528 13:47:24.619887    4516 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"543"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-v57rj","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a1552367-c51a-4f98-ab64-a7f090b4f28b","resourceVersion":"538","creationTimestamp":"2024-05-28T20:44:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"85bafc8c-e4f5-47d9-8336-e1de98adfcba","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-28T20:44:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"85bafc8c-e4f5-47d9-8336-e1de98adfcba\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 52277 chars]
	I0528 13:47:24.624880    4516 system_pods.go:59] 7 kube-system pods found
	I0528 13:47:24.625444    4516 system_pods.go:61] "coredns-7db6d8ff4d-v57rj" [a1552367-c51a-4f98-ab64-a7f090b4f28b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0528 13:47:24.625485    4516 system_pods.go:61] "etcd-functional-200200" [a5cc1ae2-d338-4a08-9a90-3ee5ff5e04b7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0528 13:47:24.625485    4516 system_pods.go:61] "kube-apiserver-functional-200200" [1562eaca-402c-4134-9e2c-30da4d4a592e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0528 13:47:24.625485    4516 system_pods.go:61] "kube-controller-manager-functional-200200" [4044c46e-3c64-4904-8e8e-88dc2dcd9446] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0528 13:47:24.625485    4516 system_pods.go:61] "kube-proxy-rnt88" [a5657586-8c98-4a0a-9ef7-cac5630f1de6] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0528 13:47:24.625485    4516 system_pods.go:61] "kube-scheduler-functional-200200" [5d092a1b-e22b-4786-9aea-7b0993dfde51] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0528 13:47:24.625485    4516 system_pods.go:61] "storage-provisioner" [8091a673-4937-4785-9a43-23a648f10e03] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0528 13:47:24.625485    4516 system_pods.go:74] duration metric: took 27.2178ms to wait for pod list to return data ...
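The pod list above is fetched with a raw REST round-tripper, and every kube-system pod is reported as Running but not yet Ready right after the restart. A hedged sketch of the same readiness check using client-go (the kubeconfig path is illustrative; minikube keeps one per profile):

// Sketch: list kube-system pods and report whether each has the Ready condition.
package main

import (
	"context"
	"fmt"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.ExpandEnv("$HOME/.kube/config"))
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	pods, err := clientset.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for _, p := range pods.Items {
		ready := false
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready = true
				break
			}
		}
		fmt.Printf("%-45s phase=%s ready=%v\n", p.Name, p.Status.Phase, ready)
	}
}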
	I0528 13:47:24.625485    4516 node_conditions.go:102] verifying NodePressure condition ...
	I0528 13:47:24.625485    4516 round_trippers.go:463] GET https://172.18.211.78:8441/api/v1/nodes
	I0528 13:47:24.625485    4516 round_trippers.go:469] Request Headers:
	I0528 13:47:24.625485    4516 round_trippers.go:473]     Accept: application/json, */*
	I0528 13:47:24.625485    4516 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 13:47:24.631063    4516 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0528 13:47:24.631063    4516 round_trippers.go:577] Response Headers:
	I0528 13:47:24.631063    4516 round_trippers.go:580]     Date: Tue, 28 May 2024 20:47:24 GMT
	I0528 13:47:24.631063    4516 round_trippers.go:580]     Audit-Id: 7e219df8-b3e4-46b2-82a3-b3bd26ce115b
	I0528 13:47:24.631063    4516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 13:47:24.631063    4516 round_trippers.go:580]     Content-Type: application/json
	I0528 13:47:24.631063    4516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6b7b9069-82e0-4cfd-9d0b-960d5eab4b37
	I0528 13:47:24.631063    4516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55d3383c-44bc-4b11-ad23-e49bd4d694f6
	I0528 13:47:24.631063    4516 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"543"},"items":[{"metadata":{"name":"functional-200200","uid":"1b987c37-282d-4e68-a3b1-23ab15a86979","resourceVersion":"532","creationTimestamp":"2024-05-28T20:44:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-200200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"functional-200200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T13_44_42_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedF
ields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","ti [truncated 4840 chars]
	I0528 13:47:24.632078    4516 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0528 13:47:24.632078    4516 node_conditions.go:123] node cpu capacity is 2
	I0528 13:47:24.632078    4516 node_conditions.go:105] duration metric: took 6.5932ms to run NodePressure ...
	I0528 13:47:24.632078    4516 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0528 13:47:25.048449    4516 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0528 13:47:25.048536    4516 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0528 13:47:25.048615    4516 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0528 13:47:25.048871    4516 round_trippers.go:463] GET https://172.18.211.78:8441/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I0528 13:47:25.048935    4516 round_trippers.go:469] Request Headers:
	I0528 13:47:25.048935    4516 round_trippers.go:473]     Accept: application/json, */*
	I0528 13:47:25.048935    4516 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 13:47:25.053247    4516 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 13:47:25.053247    4516 round_trippers.go:577] Response Headers:
	I0528 13:47:25.053312    4516 round_trippers.go:580]     Content-Type: application/json
	I0528 13:47:25.053312    4516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6b7b9069-82e0-4cfd-9d0b-960d5eab4b37
	I0528 13:47:25.053312    4516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55d3383c-44bc-4b11-ad23-e49bd4d694f6
	I0528 13:47:25.053312    4516 round_trippers.go:580]     Date: Tue, 28 May 2024 20:47:25 GMT
	I0528 13:47:25.053312    4516 round_trippers.go:580]     Audit-Id: 732bc524-0b47-4b1e-ac11-451a5c279d4d
	I0528 13:47:25.053312    4516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 13:47:25.054260    4516 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"545"},"items":[{"metadata":{"name":"etcd-functional-200200","namespace":"kube-system","uid":"a5cc1ae2-d338-4a08-9a90-3ee5ff5e04b7","resourceVersion":"537","creationTimestamp":"2024-05-28T20:44:41Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.18.211.78:2379","kubernetes.io/config.hash":"656d082e82aa61fe8809f24da8f979c4","kubernetes.io/config.mirror":"656d082e82aa61fe8809f24da8f979c4","kubernetes.io/config.seen":"2024-05-28T20:44:41.567684566Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-200200","uid":"1b987c37-282d-4e68-a3b1-23ab15a86979","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-28T20:44:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotatio
ns":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f: [truncated 31569 chars]
	I0528 13:47:25.055984    4516 kubeadm.go:733] kubelet initialised
	I0528 13:47:25.055984    4516 kubeadm.go:734] duration metric: took 7.3686ms waiting for restarted kubelet to initialise ...
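After the kubeadm addon phase, the lines above show kubeadm.go waiting for the restarted kubelet by listing kube-system pods with the tier=control-plane label selector and treating the kubelet as initialised once the list comes back non-empty (here on the first attempt, 7.3686ms later). A rough, self-contained sketch of that wait is below; as in the previous sketch, the endpoint is from the log and the bare TLS client is only a stand-in for the real certificate-based auth.

// kubeletwait.go - illustrative sketch only: poll kube-system for pods
// labelled tier=control-plane until the restarted kubelet re-registers them.
package main

import (
	"crypto/tls"
	"encoding/json"
	"fmt"
	"net/http"
	"net/url"
	"time"
)

type podList struct {
	Items []struct {
		Metadata struct {
			Name string `json:"name"`
		} `json:"metadata"`
	} `json:"items"`
}

func main() {
	const apiServer = "https://172.18.211.78:8441" // from the log; auth is a placeholder
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
	}}
	// Same query the request above shows: ?labelSelector=tier%3Dcontrol-plane
	endpoint := apiServer + "/api/v1/namespaces/kube-system/pods?labelSelector=" +
		url.QueryEscape("tier=control-plane")

	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(endpoint)
		if err == nil {
			var pl podList
			decodeErr := json.NewDecoder(resp.Body).Decode(&pl)
			resp.Body.Close()
			if decodeErr == nil && len(pl.Items) > 0 {
				fmt.Printf("kubelet initialised: %d control-plane pods\n", len(pl.Items))
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for control-plane pods")
}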
	I0528 13:47:25.055984    4516 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0528 13:47:25.055984    4516 round_trippers.go:463] GET https://172.18.211.78:8441/api/v1/namespaces/kube-system/pods
	I0528 13:47:25.056521    4516 round_trippers.go:469] Request Headers:
	I0528 13:47:25.056569    4516 round_trippers.go:473]     Accept: application/json, */*
	I0528 13:47:25.056569    4516 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 13:47:25.064672    4516 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0528 13:47:25.064672    4516 round_trippers.go:577] Response Headers:
	I0528 13:47:25.064672    4516 round_trippers.go:580]     Audit-Id: 476bb53a-5150-4c1a-9cc6-373c9f977632
	I0528 13:47:25.064672    4516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 13:47:25.064672    4516 round_trippers.go:580]     Content-Type: application/json
	I0528 13:47:25.064672    4516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6b7b9069-82e0-4cfd-9d0b-960d5eab4b37
	I0528 13:47:25.064672    4516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55d3383c-44bc-4b11-ad23-e49bd4d694f6
	I0528 13:47:25.064672    4516 round_trippers.go:580]     Date: Tue, 28 May 2024 20:47:25 GMT
	I0528 13:47:25.066180    4516 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"545"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-v57rj","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a1552367-c51a-4f98-ab64-a7f090b4f28b","resourceVersion":"538","creationTimestamp":"2024-05-28T20:44:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"85bafc8c-e4f5-47d9-8336-e1de98adfcba","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-28T20:44:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"85bafc8c-e4f5-47d9-8336-e1de98adfcba\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 52277 chars]
	I0528 13:47:25.073431    4516 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-v57rj" in "kube-system" namespace to be "Ready" ...
	I0528 13:47:25.073431    4516 round_trippers.go:463] GET https://172.18.211.78:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-v57rj
	I0528 13:47:25.073431    4516 round_trippers.go:469] Request Headers:
	I0528 13:47:25.073431    4516 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 13:47:25.073431    4516 round_trippers.go:473]     Accept: application/json, */*
	I0528 13:47:25.082410    4516 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0528 13:47:25.082841    4516 round_trippers.go:577] Response Headers:
	I0528 13:47:25.082841    4516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6b7b9069-82e0-4cfd-9d0b-960d5eab4b37
	I0528 13:47:25.082841    4516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55d3383c-44bc-4b11-ad23-e49bd4d694f6
	I0528 13:47:25.082841    4516 round_trippers.go:580]     Date: Tue, 28 May 2024 20:47:25 GMT
	I0528 13:47:25.082841    4516 round_trippers.go:580]     Audit-Id: 4e0276c7-daa0-4e36-99a6-0738bcd01aa5
	I0528 13:47:25.082841    4516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 13:47:25.082841    4516 round_trippers.go:580]     Content-Type: application/json
	I0528 13:47:25.083121    4516 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-v57rj","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a1552367-c51a-4f98-ab64-a7f090b4f28b","resourceVersion":"538","creationTimestamp":"2024-05-28T20:44:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"85bafc8c-e4f5-47d9-8336-e1de98adfcba","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-28T20:44:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"85bafc8c-e4f5-47d9-8336-e1de98adfcba\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6503 chars]
	I0528 13:47:25.083769    4516 round_trippers.go:463] GET https://172.18.211.78:8441/api/v1/nodes/functional-200200
	I0528 13:47:25.083825    4516 round_trippers.go:469] Request Headers:
	I0528 13:47:25.083825    4516 round_trippers.go:473]     Accept: application/json, */*
	I0528 13:47:25.083883    4516 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 13:47:25.087064    4516 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 13:47:25.087064    4516 round_trippers.go:577] Response Headers:
	I0528 13:47:25.087064    4516 round_trippers.go:580]     Date: Tue, 28 May 2024 20:47:25 GMT
	I0528 13:47:25.087183    4516 round_trippers.go:580]     Audit-Id: 2fadc8b1-d0a7-47e7-b8da-37386965acf3
	I0528 13:47:25.087183    4516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 13:47:25.087183    4516 round_trippers.go:580]     Content-Type: application/json
	I0528 13:47:25.087183    4516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6b7b9069-82e0-4cfd-9d0b-960d5eab4b37
	I0528 13:47:25.087183    4516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55d3383c-44bc-4b11-ad23-e49bd4d694f6
	I0528 13:47:25.087861    4516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-200200","uid":"1b987c37-282d-4e68-a3b1-23ab15a86979","resourceVersion":"532","creationTimestamp":"2024-05-28T20:44:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-200200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"functional-200200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T13_44_42_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-28T20:44:37Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0528 13:47:25.575384    4516 round_trippers.go:463] GET https://172.18.211.78:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-v57rj
	I0528 13:47:25.575384    4516 round_trippers.go:469] Request Headers:
	I0528 13:47:25.575384    4516 round_trippers.go:473]     Accept: application/json, */*
	I0528 13:47:25.575384    4516 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 13:47:25.578978    4516 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 13:47:25.579879    4516 round_trippers.go:577] Response Headers:
	I0528 13:47:25.579879    4516 round_trippers.go:580]     Audit-Id: ac455f92-03a8-4f17-b873-674e5769510e
	I0528 13:47:25.579879    4516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 13:47:25.579879    4516 round_trippers.go:580]     Content-Type: application/json
	I0528 13:47:25.579879    4516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6b7b9069-82e0-4cfd-9d0b-960d5eab4b37
	I0528 13:47:25.579955    4516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55d3383c-44bc-4b11-ad23-e49bd4d694f6
	I0528 13:47:25.579995    4516 round_trippers.go:580]     Date: Tue, 28 May 2024 20:47:25 GMT
	I0528 13:47:25.580282    4516 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-v57rj","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a1552367-c51a-4f98-ab64-a7f090b4f28b","resourceVersion":"547","creationTimestamp":"2024-05-28T20:44:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"85bafc8c-e4f5-47d9-8336-e1de98adfcba","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-28T20:44:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"85bafc8c-e4f5-47d9-8336-e1de98adfcba\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6679 chars]
	I0528 13:47:25.581131    4516 round_trippers.go:463] GET https://172.18.211.78:8441/api/v1/nodes/functional-200200
	I0528 13:47:25.581131    4516 round_trippers.go:469] Request Headers:
	I0528 13:47:25.581131    4516 round_trippers.go:473]     Accept: application/json, */*
	I0528 13:47:25.581131    4516 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 13:47:25.587343    4516 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0528 13:47:25.587343    4516 round_trippers.go:577] Response Headers:
	I0528 13:47:25.587404    4516 round_trippers.go:580]     Audit-Id: ab409800-6fc5-44b2-a6d6-da798c3ef627
	I0528 13:47:25.587404    4516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 13:47:25.587404    4516 round_trippers.go:580]     Content-Type: application/json
	I0528 13:47:25.587404    4516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6b7b9069-82e0-4cfd-9d0b-960d5eab4b37
	I0528 13:47:25.587404    4516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55d3383c-44bc-4b11-ad23-e49bd4d694f6
	I0528 13:47:25.587464    4516 round_trippers.go:580]     Date: Tue, 28 May 2024 20:47:25 GMT
	I0528 13:47:25.587589    4516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-200200","uid":"1b987c37-282d-4e68-a3b1-23ab15a86979","resourceVersion":"532","creationTimestamp":"2024-05-28T20:44:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-200200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"functional-200200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T13_44_42_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-28T20:44:37Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0528 13:47:26.079200    4516 round_trippers.go:463] GET https://172.18.211.78:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-v57rj
	I0528 13:47:26.079200    4516 round_trippers.go:469] Request Headers:
	I0528 13:47:26.079200    4516 round_trippers.go:473]     Accept: application/json, */*
	I0528 13:47:26.079200    4516 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 13:47:26.082764    4516 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 13:47:26.083555    4516 round_trippers.go:577] Response Headers:
	I0528 13:47:26.083627    4516 round_trippers.go:580]     Audit-Id: 87a94e29-a8dd-4dfd-a1ae-d6eba73b4dcf
	I0528 13:47:26.083661    4516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 13:47:26.083661    4516 round_trippers.go:580]     Content-Type: application/json
	I0528 13:47:26.083661    4516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6b7b9069-82e0-4cfd-9d0b-960d5eab4b37
	I0528 13:47:26.083707    4516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55d3383c-44bc-4b11-ad23-e49bd4d694f6
	I0528 13:47:26.083707    4516 round_trippers.go:580]     Date: Tue, 28 May 2024 20:47:26 GMT
	I0528 13:47:26.083873    4516 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-v57rj","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a1552367-c51a-4f98-ab64-a7f090b4f28b","resourceVersion":"547","creationTimestamp":"2024-05-28T20:44:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"85bafc8c-e4f5-47d9-8336-e1de98adfcba","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-28T20:44:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"85bafc8c-e4f5-47d9-8336-e1de98adfcba\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6679 chars]
	I0528 13:47:26.084606    4516 round_trippers.go:463] GET https://172.18.211.78:8441/api/v1/nodes/functional-200200
	I0528 13:47:26.084642    4516 round_trippers.go:469] Request Headers:
	I0528 13:47:26.084701    4516 round_trippers.go:473]     Accept: application/json, */*
	I0528 13:47:26.084701    4516 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 13:47:26.086637    4516 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0528 13:47:26.086637    4516 round_trippers.go:577] Response Headers:
	I0528 13:47:26.086637    4516 round_trippers.go:580]     Content-Type: application/json
	I0528 13:47:26.086637    4516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6b7b9069-82e0-4cfd-9d0b-960d5eab4b37
	I0528 13:47:26.086637    4516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55d3383c-44bc-4b11-ad23-e49bd4d694f6
	I0528 13:47:26.086637    4516 round_trippers.go:580]     Date: Tue, 28 May 2024 20:47:26 GMT
	I0528 13:47:26.086637    4516 round_trippers.go:580]     Audit-Id: 8dd9da2f-f90d-4c69-b3d3-0e076c6f5963
	I0528 13:47:26.086637    4516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 13:47:26.087643    4516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-200200","uid":"1b987c37-282d-4e68-a3b1-23ab15a86979","resourceVersion":"532","creationTimestamp":"2024-05-28T20:44:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-200200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"functional-200200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T13_44_42_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-28T20:44:37Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0528 13:47:26.582323    4516 round_trippers.go:463] GET https://172.18.211.78:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-v57rj
	I0528 13:47:26.582323    4516 round_trippers.go:469] Request Headers:
	I0528 13:47:26.582509    4516 round_trippers.go:473]     Accept: application/json, */*
	I0528 13:47:26.582509    4516 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 13:47:26.586758    4516 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 13:47:26.586830    4516 round_trippers.go:577] Response Headers:
	I0528 13:47:26.586830    4516 round_trippers.go:580]     Audit-Id: 73f29e4f-65d9-4cad-a865-edc2df4170c4
	I0528 13:47:26.586830    4516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 13:47:26.586830    4516 round_trippers.go:580]     Content-Type: application/json
	I0528 13:47:26.586830    4516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6b7b9069-82e0-4cfd-9d0b-960d5eab4b37
	I0528 13:47:26.586830    4516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55d3383c-44bc-4b11-ad23-e49bd4d694f6
	I0528 13:47:26.586830    4516 round_trippers.go:580]     Date: Tue, 28 May 2024 20:47:26 GMT
	I0528 13:47:26.587194    4516 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-v57rj","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a1552367-c51a-4f98-ab64-a7f090b4f28b","resourceVersion":"547","creationTimestamp":"2024-05-28T20:44:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"85bafc8c-e4f5-47d9-8336-e1de98adfcba","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-28T20:44:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"85bafc8c-e4f5-47d9-8336-e1de98adfcba\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6679 chars]
	I0528 13:47:26.587432    4516 round_trippers.go:463] GET https://172.18.211.78:8441/api/v1/nodes/functional-200200
	I0528 13:47:26.587432    4516 round_trippers.go:469] Request Headers:
	I0528 13:47:26.587432    4516 round_trippers.go:473]     Accept: application/json, */*
	I0528 13:47:26.587978    4516 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 13:47:26.590176    4516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 13:47:26.590176    4516 round_trippers.go:577] Response Headers:
	I0528 13:47:26.590492    4516 round_trippers.go:580]     Content-Type: application/json
	I0528 13:47:26.590492    4516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6b7b9069-82e0-4cfd-9d0b-960d5eab4b37
	I0528 13:47:26.590492    4516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55d3383c-44bc-4b11-ad23-e49bd4d694f6
	I0528 13:47:26.590492    4516 round_trippers.go:580]     Date: Tue, 28 May 2024 20:47:26 GMT
	I0528 13:47:26.590492    4516 round_trippers.go:580]     Audit-Id: 1fce9cdd-6429-4dc8-94c3-35f563d9ad0d
	I0528 13:47:26.590492    4516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 13:47:26.590821    4516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-200200","uid":"1b987c37-282d-4e68-a3b1-23ab15a86979","resourceVersion":"532","creationTimestamp":"2024-05-28T20:44:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-200200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"functional-200200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T13_44_42_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-28T20:44:37Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0528 13:47:27.080867    4516 round_trippers.go:463] GET https://172.18.211.78:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-v57rj
	I0528 13:47:27.080986    4516 round_trippers.go:469] Request Headers:
	I0528 13:47:27.081010    4516 round_trippers.go:473]     Accept: application/json, */*
	I0528 13:47:27.081010    4516 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 13:47:27.086295    4516 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0528 13:47:27.086295    4516 round_trippers.go:577] Response Headers:
	I0528 13:47:27.086295    4516 round_trippers.go:580]     Content-Type: application/json
	I0528 13:47:27.086498    4516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6b7b9069-82e0-4cfd-9d0b-960d5eab4b37
	I0528 13:47:27.086498    4516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55d3383c-44bc-4b11-ad23-e49bd4d694f6
	I0528 13:47:27.086498    4516 round_trippers.go:580]     Date: Tue, 28 May 2024 20:47:27 GMT
	I0528 13:47:27.086498    4516 round_trippers.go:580]     Audit-Id: a7f4cc72-012b-4563-9929-e5b0a3278d06
	I0528 13:47:27.086498    4516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 13:47:27.086782    4516 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-v57rj","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a1552367-c51a-4f98-ab64-a7f090b4f28b","resourceVersion":"547","creationTimestamp":"2024-05-28T20:44:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"85bafc8c-e4f5-47d9-8336-e1de98adfcba","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-28T20:44:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"85bafc8c-e4f5-47d9-8336-e1de98adfcba\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6679 chars]
	I0528 13:47:27.087235    4516 round_trippers.go:463] GET https://172.18.211.78:8441/api/v1/nodes/functional-200200
	I0528 13:47:27.087235    4516 round_trippers.go:469] Request Headers:
	I0528 13:47:27.087235    4516 round_trippers.go:473]     Accept: application/json, */*
	I0528 13:47:27.087235    4516 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 13:47:27.091023    4516 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 13:47:27.091023    4516 round_trippers.go:577] Response Headers:
	I0528 13:47:27.091023    4516 round_trippers.go:580]     Date: Tue, 28 May 2024 20:47:27 GMT
	I0528 13:47:27.091023    4516 round_trippers.go:580]     Audit-Id: 458ba74e-f9a9-4b76-9d3a-f8f4cfecb83f
	I0528 13:47:27.091023    4516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 13:47:27.091150    4516 round_trippers.go:580]     Content-Type: application/json
	I0528 13:47:27.091150    4516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6b7b9069-82e0-4cfd-9d0b-960d5eab4b37
	I0528 13:47:27.091150    4516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55d3383c-44bc-4b11-ad23-e49bd4d694f6
	I0528 13:47:27.091182    4516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-200200","uid":"1b987c37-282d-4e68-a3b1-23ab15a86979","resourceVersion":"532","creationTimestamp":"2024-05-28T20:44:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-200200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"functional-200200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T13_44_42_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-28T20:44:37Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0528 13:47:27.091764    4516 pod_ready.go:102] pod "coredns-7db6d8ff4d-v57rj" in "kube-system" namespace has status "Ready":"False"
	I0528 13:47:27.579417    4516 round_trippers.go:463] GET https://172.18.211.78:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-v57rj
	I0528 13:47:27.579417    4516 round_trippers.go:469] Request Headers:
	I0528 13:47:27.579417    4516 round_trippers.go:473]     Accept: application/json, */*
	I0528 13:47:27.579417    4516 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 13:47:27.584255    4516 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 13:47:27.584255    4516 round_trippers.go:577] Response Headers:
	I0528 13:47:27.584255    4516 round_trippers.go:580]     Date: Tue, 28 May 2024 20:47:27 GMT
	I0528 13:47:27.584255    4516 round_trippers.go:580]     Audit-Id: 9e2d7b93-2e16-41cb-8760-753bc9289712
	I0528 13:47:27.584255    4516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 13:47:27.584508    4516 round_trippers.go:580]     Content-Type: application/json
	I0528 13:47:27.584508    4516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6b7b9069-82e0-4cfd-9d0b-960d5eab4b37
	I0528 13:47:27.584508    4516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55d3383c-44bc-4b11-ad23-e49bd4d694f6
	I0528 13:47:27.584713    4516 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-v57rj","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a1552367-c51a-4f98-ab64-a7f090b4f28b","resourceVersion":"547","creationTimestamp":"2024-05-28T20:44:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"85bafc8c-e4f5-47d9-8336-e1de98adfcba","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-28T20:44:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"85bafc8c-e4f5-47d9-8336-e1de98adfcba\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6679 chars]
	I0528 13:47:27.585389    4516 round_trippers.go:463] GET https://172.18.211.78:8441/api/v1/nodes/functional-200200
	I0528 13:47:27.585552    4516 round_trippers.go:469] Request Headers:
	I0528 13:47:27.585552    4516 round_trippers.go:473]     Accept: application/json, */*
	I0528 13:47:27.585552    4516 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 13:47:27.590187    4516 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 13:47:27.590321    4516 round_trippers.go:577] Response Headers:
	I0528 13:47:27.590321    4516 round_trippers.go:580]     Date: Tue, 28 May 2024 20:47:27 GMT
	I0528 13:47:27.590321    4516 round_trippers.go:580]     Audit-Id: bd621f16-3295-42ef-8997-d40d2d5e6382
	I0528 13:47:27.590321    4516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 13:47:27.590321    4516 round_trippers.go:580]     Content-Type: application/json
	I0528 13:47:27.590321    4516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6b7b9069-82e0-4cfd-9d0b-960d5eab4b37
	I0528 13:47:27.590321    4516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55d3383c-44bc-4b11-ad23-e49bd4d694f6
	I0528 13:47:27.590321    4516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-200200","uid":"1b987c37-282d-4e68-a3b1-23ab15a86979","resourceVersion":"532","creationTimestamp":"2024-05-28T20:44:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-200200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"functional-200200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T13_44_42_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-28T20:44:37Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0528 13:47:28.081588    4516 round_trippers.go:463] GET https://172.18.211.78:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-v57rj
	I0528 13:47:28.081621    4516 round_trippers.go:469] Request Headers:
	I0528 13:47:28.081677    4516 round_trippers.go:473]     Accept: application/json, */*
	I0528 13:47:28.081677    4516 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 13:47:28.095333    4516 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0528 13:47:28.095380    4516 round_trippers.go:577] Response Headers:
	I0528 13:47:28.095380    4516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6b7b9069-82e0-4cfd-9d0b-960d5eab4b37
	I0528 13:47:28.095380    4516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55d3383c-44bc-4b11-ad23-e49bd4d694f6
	I0528 13:47:28.095380    4516 round_trippers.go:580]     Date: Tue, 28 May 2024 20:47:28 GMT
	I0528 13:47:28.095380    4516 round_trippers.go:580]     Audit-Id: 26e90410-7d88-4496-a594-03973b484f62
	I0528 13:47:28.095465    4516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 13:47:28.095465    4516 round_trippers.go:580]     Content-Type: application/json
	I0528 13:47:28.095988    4516 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-v57rj","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a1552367-c51a-4f98-ab64-a7f090b4f28b","resourceVersion":"605","creationTimestamp":"2024-05-28T20:44:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"85bafc8c-e4f5-47d9-8336-e1de98adfcba","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-28T20:44:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"85bafc8c-e4f5-47d9-8336-e1de98adfcba\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6450 chars]
	I0528 13:47:28.097103    4516 round_trippers.go:463] GET https://172.18.211.78:8441/api/v1/nodes/functional-200200
	I0528 13:47:28.097103    4516 round_trippers.go:469] Request Headers:
	I0528 13:47:28.097103    4516 round_trippers.go:473]     Accept: application/json, */*
	I0528 13:47:28.097103    4516 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 13:47:28.101139    4516 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 13:47:28.101322    4516 round_trippers.go:577] Response Headers:
	I0528 13:47:28.101322    4516 round_trippers.go:580]     Audit-Id: f1cb6086-ca4e-4af6-a86a-78b0c4aa4128
	I0528 13:47:28.101381    4516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 13:47:28.101381    4516 round_trippers.go:580]     Content-Type: application/json
	I0528 13:47:28.101381    4516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6b7b9069-82e0-4cfd-9d0b-960d5eab4b37
	I0528 13:47:28.101381    4516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55d3383c-44bc-4b11-ad23-e49bd4d694f6
	I0528 13:47:28.101445    4516 round_trippers.go:580]     Date: Tue, 28 May 2024 20:47:28 GMT
	I0528 13:47:28.102033    4516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-200200","uid":"1b987c37-282d-4e68-a3b1-23ab15a86979","resourceVersion":"532","creationTimestamp":"2024-05-28T20:44:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-200200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"functional-200200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T13_44_42_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-28T20:44:37Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0528 13:47:28.102722    4516 pod_ready.go:92] pod "coredns-7db6d8ff4d-v57rj" in "kube-system" namespace has status "Ready":"True"
	I0528 13:47:28.102771    4516 pod_ready.go:81] duration metric: took 3.0293354s for pod "coredns-7db6d8ff4d-v57rj" in "kube-system" namespace to be "Ready" ...
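The long run of repeated GET pairs above is pod_ready.go polling a single pod: fetch the pod, look for a Ready condition with status "True", fetch the node as a liveness check, then retry roughly every 500ms for up to 4 minutes (the loop logs "Ready":"False" while it waits and exits once it sees "Ready":"True", as it just did for coredns after about 3s). A condensed sketch of that per-pod wait follows; it checks only the pod's Ready condition and reuses the placeholder client from the earlier sketches, so it approximates the behaviour in the log rather than reproducing the actual minikube code.

// podready.go - illustrative sketch only: wait for one pod's Ready condition,
// the loop behind the repeated pod/node GET pairs above.
package main

import (
	"crypto/tls"
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

type pod struct {
	Status struct {
		Conditions []struct {
			Type   string `json:"type"`
			Status string `json:"status"`
		} `json:"conditions"`
	} `json:"status"`
}

func podReady(client *http.Client, podURL string) (bool, error) {
	resp, err := client.Get(podURL)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	var p pod
	if err := json.NewDecoder(resp.Body).Decode(&p); err != nil {
		return false, err
	}
	for _, c := range p.Status.Conditions {
		if c.Type == "Ready" {
			return c.Status == "True", nil
		}
	}
	return false, nil
}

func main() {
	const apiServer = "https://172.18.211.78:8441" // from the log; auth is a placeholder
	podURL := apiServer + "/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-v57rj"
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
	}}

	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		ready, err := podReady(client, podURL)
		if err == nil && ready {
			fmt.Println(`pod has status "Ready":"True"`)
			return
		}
		// While not ready, keep polling; pod_ready.go logs "Ready":"False" periodically.
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for pod to become Ready")
}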
	I0528 13:47:28.102821    4516 pod_ready.go:78] waiting up to 4m0s for pod "etcd-functional-200200" in "kube-system" namespace to be "Ready" ...
	I0528 13:47:28.102976    4516 round_trippers.go:463] GET https://172.18.211.78:8441/api/v1/namespaces/kube-system/pods/etcd-functional-200200
	I0528 13:47:28.103020    4516 round_trippers.go:469] Request Headers:
	I0528 13:47:28.103020    4516 round_trippers.go:473]     Accept: application/json, */*
	I0528 13:47:28.103075    4516 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 13:47:28.107407    4516 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 13:47:28.107805    4516 round_trippers.go:577] Response Headers:
	I0528 13:47:28.107805    4516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 13:47:28.107805    4516 round_trippers.go:580]     Content-Type: application/json
	I0528 13:47:28.107913    4516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6b7b9069-82e0-4cfd-9d0b-960d5eab4b37
	I0528 13:47:28.107913    4516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55d3383c-44bc-4b11-ad23-e49bd4d694f6
	I0528 13:47:28.107913    4516 round_trippers.go:580]     Date: Tue, 28 May 2024 20:47:28 GMT
	I0528 13:47:28.107913    4516 round_trippers.go:580]     Audit-Id: 4e1979ea-99e1-4f7a-8d7a-26118a295b01
	I0528 13:47:28.108143    4516 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-200200","namespace":"kube-system","uid":"a5cc1ae2-d338-4a08-9a90-3ee5ff5e04b7","resourceVersion":"537","creationTimestamp":"2024-05-28T20:44:41Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.18.211.78:2379","kubernetes.io/config.hash":"656d082e82aa61fe8809f24da8f979c4","kubernetes.io/config.mirror":"656d082e82aa61fe8809f24da8f979c4","kubernetes.io/config.seen":"2024-05-28T20:44:41.567684566Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-200200","uid":"1b987c37-282d-4e68-a3b1-23ab15a86979","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-28T20:44:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6819 chars]
	I0528 13:47:28.109103    4516 round_trippers.go:463] GET https://172.18.211.78:8441/api/v1/nodes/functional-200200
	I0528 13:47:28.109103    4516 round_trippers.go:469] Request Headers:
	I0528 13:47:28.109103    4516 round_trippers.go:473]     Accept: application/json, */*
	I0528 13:47:28.109103    4516 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 13:47:28.113669    4516 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 13:47:28.113729    4516 round_trippers.go:577] Response Headers:
	I0528 13:47:28.113729    4516 round_trippers.go:580]     Date: Tue, 28 May 2024 20:47:28 GMT
	I0528 13:47:28.113729    4516 round_trippers.go:580]     Audit-Id: 331940f7-9ef9-49ad-86ce-df4854c540d4
	I0528 13:47:28.113801    4516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 13:47:28.113801    4516 round_trippers.go:580]     Content-Type: application/json
	I0528 13:47:28.113801    4516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6b7b9069-82e0-4cfd-9d0b-960d5eab4b37
	I0528 13:47:28.113861    4516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55d3383c-44bc-4b11-ad23-e49bd4d694f6
	I0528 13:47:28.114515    4516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-200200","uid":"1b987c37-282d-4e68-a3b1-23ab15a86979","resourceVersion":"532","creationTimestamp":"2024-05-28T20:44:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-200200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"functional-200200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T13_44_42_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-28T20:44:37Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0528 13:47:28.615738    4516 round_trippers.go:463] GET https://172.18.211.78:8441/api/v1/namespaces/kube-system/pods/etcd-functional-200200
	I0528 13:47:28.615805    4516 round_trippers.go:469] Request Headers:
	I0528 13:47:28.615805    4516 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 13:47:28.615805    4516 round_trippers.go:473]     Accept: application/json, */*
	I0528 13:47:28.620938    4516 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 13:47:28.620938    4516 round_trippers.go:577] Response Headers:
	I0528 13:47:28.620938    4516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 13:47:28.620938    4516 round_trippers.go:580]     Content-Type: application/json
	I0528 13:47:28.620938    4516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6b7b9069-82e0-4cfd-9d0b-960d5eab4b37
	I0528 13:47:28.620938    4516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55d3383c-44bc-4b11-ad23-e49bd4d694f6
	I0528 13:47:28.620938    4516 round_trippers.go:580]     Date: Tue, 28 May 2024 20:47:28 GMT
	I0528 13:47:28.620938    4516 round_trippers.go:580]     Audit-Id: 5fe2e763-b171-4694-9c34-43d70e5acb22
	I0528 13:47:28.621248    4516 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-200200","namespace":"kube-system","uid":"a5cc1ae2-d338-4a08-9a90-3ee5ff5e04b7","resourceVersion":"537","creationTimestamp":"2024-05-28T20:44:41Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.18.211.78:2379","kubernetes.io/config.hash":"656d082e82aa61fe8809f24da8f979c4","kubernetes.io/config.mirror":"656d082e82aa61fe8809f24da8f979c4","kubernetes.io/config.seen":"2024-05-28T20:44:41.567684566Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-200200","uid":"1b987c37-282d-4e68-a3b1-23ab15a86979","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-28T20:44:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6819 chars]
	I0528 13:47:28.622015    4516 round_trippers.go:463] GET https://172.18.211.78:8441/api/v1/nodes/functional-200200
	I0528 13:47:28.622069    4516 round_trippers.go:469] Request Headers:
	I0528 13:47:28.622069    4516 round_trippers.go:473]     Accept: application/json, */*
	I0528 13:47:28.622069    4516 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 13:47:28.624817    4516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 13:47:28.625430    4516 round_trippers.go:577] Response Headers:
	I0528 13:47:28.625430    4516 round_trippers.go:580]     Audit-Id: cd266c96-727d-49dd-b207-8058e3aadebd
	I0528 13:47:28.625430    4516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 13:47:28.625430    4516 round_trippers.go:580]     Content-Type: application/json
	I0528 13:47:28.625430    4516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6b7b9069-82e0-4cfd-9d0b-960d5eab4b37
	I0528 13:47:28.625514    4516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55d3383c-44bc-4b11-ad23-e49bd4d694f6
	I0528 13:47:28.625514    4516 round_trippers.go:580]     Date: Tue, 28 May 2024 20:47:28 GMT
	I0528 13:47:28.625790    4516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-200200","uid":"1b987c37-282d-4e68-a3b1-23ab15a86979","resourceVersion":"532","creationTimestamp":"2024-05-28T20:44:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-200200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"functional-200200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T13_44_42_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-28T20:44:37Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0528 13:47:29.105018    4516 round_trippers.go:463] GET https://172.18.211.78:8441/api/v1/namespaces/kube-system/pods/etcd-functional-200200
	I0528 13:47:29.105096    4516 round_trippers.go:469] Request Headers:
	I0528 13:47:29.105191    4516 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 13:47:29.105191    4516 round_trippers.go:473]     Accept: application/json, */*
	I0528 13:47:29.112360    4516 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0528 13:47:29.112360    4516 round_trippers.go:577] Response Headers:
	I0528 13:47:29.112360    4516 round_trippers.go:580]     Audit-Id: 76580c5f-aa82-4dde-ac3a-b22987045591
	I0528 13:47:29.112360    4516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 13:47:29.112360    4516 round_trippers.go:580]     Content-Type: application/json
	I0528 13:47:29.112360    4516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6b7b9069-82e0-4cfd-9d0b-960d5eab4b37
	I0528 13:47:29.112360    4516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55d3383c-44bc-4b11-ad23-e49bd4d694f6
	I0528 13:47:29.112360    4516 round_trippers.go:580]     Date: Tue, 28 May 2024 20:47:29 GMT
	I0528 13:47:29.112978    4516 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-200200","namespace":"kube-system","uid":"a5cc1ae2-d338-4a08-9a90-3ee5ff5e04b7","resourceVersion":"537","creationTimestamp":"2024-05-28T20:44:41Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.18.211.78:2379","kubernetes.io/config.hash":"656d082e82aa61fe8809f24da8f979c4","kubernetes.io/config.mirror":"656d082e82aa61fe8809f24da8f979c4","kubernetes.io/config.seen":"2024-05-28T20:44:41.567684566Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-200200","uid":"1b987c37-282d-4e68-a3b1-23ab15a86979","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-28T20:44:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6819 chars]
	I0528 13:47:29.113670    4516 round_trippers.go:463] GET https://172.18.211.78:8441/api/v1/nodes/functional-200200
	I0528 13:47:29.113670    4516 round_trippers.go:469] Request Headers:
	I0528 13:47:29.113670    4516 round_trippers.go:473]     Accept: application/json, */*
	I0528 13:47:29.113670    4516 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 13:47:29.116933    4516 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 13:47:29.116933    4516 round_trippers.go:577] Response Headers:
	I0528 13:47:29.116933    4516 round_trippers.go:580]     Date: Tue, 28 May 2024 20:47:29 GMT
	I0528 13:47:29.116933    4516 round_trippers.go:580]     Audit-Id: cf155bce-48f8-402b-ac40-2d9c80c2e079
	I0528 13:47:29.116933    4516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 13:47:29.116933    4516 round_trippers.go:580]     Content-Type: application/json
	I0528 13:47:29.116933    4516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6b7b9069-82e0-4cfd-9d0b-960d5eab4b37
	I0528 13:47:29.116933    4516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55d3383c-44bc-4b11-ad23-e49bd4d694f6
	I0528 13:47:29.117541    4516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-200200","uid":"1b987c37-282d-4e68-a3b1-23ab15a86979","resourceVersion":"532","creationTimestamp":"2024-05-28T20:44:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-200200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"functional-200200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T13_44_42_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-28T20:44:37Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0528 13:47:29.604176    4516 round_trippers.go:463] GET https://172.18.211.78:8441/api/v1/namespaces/kube-system/pods/etcd-functional-200200
	I0528 13:47:29.604239    4516 round_trippers.go:469] Request Headers:
	I0528 13:47:29.604239    4516 round_trippers.go:473]     Accept: application/json, */*
	I0528 13:47:29.604239    4516 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 13:47:29.608508    4516 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 13:47:29.608792    4516 round_trippers.go:577] Response Headers:
	I0528 13:47:29.608792    4516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55d3383c-44bc-4b11-ad23-e49bd4d694f6
	I0528 13:47:29.608792    4516 round_trippers.go:580]     Date: Tue, 28 May 2024 20:47:29 GMT
	I0528 13:47:29.608792    4516 round_trippers.go:580]     Audit-Id: f4190553-dfe1-4012-ad4e-25fe00738b16
	I0528 13:47:29.608792    4516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 13:47:29.608792    4516 round_trippers.go:580]     Content-Type: application/json
	I0528 13:47:29.608792    4516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6b7b9069-82e0-4cfd-9d0b-960d5eab4b37
	I0528 13:47:29.609098    4516 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-200200","namespace":"kube-system","uid":"a5cc1ae2-d338-4a08-9a90-3ee5ff5e04b7","resourceVersion":"537","creationTimestamp":"2024-05-28T20:44:41Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.18.211.78:2379","kubernetes.io/config.hash":"656d082e82aa61fe8809f24da8f979c4","kubernetes.io/config.mirror":"656d082e82aa61fe8809f24da8f979c4","kubernetes.io/config.seen":"2024-05-28T20:44:41.567684566Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-200200","uid":"1b987c37-282d-4e68-a3b1-23ab15a86979","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-28T20:44:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6819 chars]
	I0528 13:47:29.609774    4516 round_trippers.go:463] GET https://172.18.211.78:8441/api/v1/nodes/functional-200200
	I0528 13:47:29.609774    4516 round_trippers.go:469] Request Headers:
	I0528 13:47:29.609774    4516 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 13:47:29.609774    4516 round_trippers.go:473]     Accept: application/json, */*
	I0528 13:47:29.614116    4516 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 13:47:29.614436    4516 round_trippers.go:577] Response Headers:
	I0528 13:47:29.614436    4516 round_trippers.go:580]     Audit-Id: 1cf9ec8a-fbaf-49b6-ac65-cbe1e9e8b202
	I0528 13:47:29.614436    4516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 13:47:29.614436    4516 round_trippers.go:580]     Content-Type: application/json
	I0528 13:47:29.614436    4516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6b7b9069-82e0-4cfd-9d0b-960d5eab4b37
	I0528 13:47:29.614436    4516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55d3383c-44bc-4b11-ad23-e49bd4d694f6
	I0528 13:47:29.614436    4516 round_trippers.go:580]     Date: Tue, 28 May 2024 20:47:29 GMT
	I0528 13:47:29.614436    4516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-200200","uid":"1b987c37-282d-4e68-a3b1-23ab15a86979","resourceVersion":"532","creationTimestamp":"2024-05-28T20:44:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-200200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"functional-200200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T13_44_42_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-28T20:44:37Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0528 13:47:30.104140    4516 round_trippers.go:463] GET https://172.18.211.78:8441/api/v1/namespaces/kube-system/pods/etcd-functional-200200
	I0528 13:47:30.104140    4516 round_trippers.go:469] Request Headers:
	I0528 13:47:30.104140    4516 round_trippers.go:473]     Accept: application/json, */*
	I0528 13:47:30.104140    4516 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 13:47:30.108973    4516 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 13:47:30.109011    4516 round_trippers.go:577] Response Headers:
	I0528 13:47:30.109011    4516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 13:47:30.109011    4516 round_trippers.go:580]     Content-Type: application/json
	I0528 13:47:30.109011    4516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6b7b9069-82e0-4cfd-9d0b-960d5eab4b37
	I0528 13:47:30.109097    4516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55d3383c-44bc-4b11-ad23-e49bd4d694f6
	I0528 13:47:30.109097    4516 round_trippers.go:580]     Date: Tue, 28 May 2024 20:47:30 GMT
	I0528 13:47:30.109117    4516 round_trippers.go:580]     Audit-Id: d6222bf9-9177-4ffe-a628-cc10028390a3
	I0528 13:47:30.109318    4516 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-200200","namespace":"kube-system","uid":"a5cc1ae2-d338-4a08-9a90-3ee5ff5e04b7","resourceVersion":"537","creationTimestamp":"2024-05-28T20:44:41Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.18.211.78:2379","kubernetes.io/config.hash":"656d082e82aa61fe8809f24da8f979c4","kubernetes.io/config.mirror":"656d082e82aa61fe8809f24da8f979c4","kubernetes.io/config.seen":"2024-05-28T20:44:41.567684566Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-200200","uid":"1b987c37-282d-4e68-a3b1-23ab15a86979","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-28T20:44:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6819 chars]
	I0528 13:47:30.110131    4516 round_trippers.go:463] GET https://172.18.211.78:8441/api/v1/nodes/functional-200200
	I0528 13:47:30.110131    4516 round_trippers.go:469] Request Headers:
	I0528 13:47:30.110131    4516 round_trippers.go:473]     Accept: application/json, */*
	I0528 13:47:30.110131    4516 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 13:47:30.113395    4516 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 13:47:30.113395    4516 round_trippers.go:577] Response Headers:
	I0528 13:47:30.113395    4516 round_trippers.go:580]     Audit-Id: e9c70644-cfb8-4704-8a21-8947db61f721
	I0528 13:47:30.113395    4516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 13:47:30.113930    4516 round_trippers.go:580]     Content-Type: application/json
	I0528 13:47:30.113930    4516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6b7b9069-82e0-4cfd-9d0b-960d5eab4b37
	I0528 13:47:30.113982    4516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55d3383c-44bc-4b11-ad23-e49bd4d694f6
	I0528 13:47:30.114004    4516 round_trippers.go:580]     Date: Tue, 28 May 2024 20:47:30 GMT
	I0528 13:47:30.114136    4516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-200200","uid":"1b987c37-282d-4e68-a3b1-23ab15a86979","resourceVersion":"532","creationTimestamp":"2024-05-28T20:44:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-200200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"functional-200200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T13_44_42_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-28T20:44:37Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0528 13:47:30.114670    4516 pod_ready.go:102] pod "etcd-functional-200200" in "kube-system" namespace has status "Ready":"False"
	I0528 13:47:30.604288    4516 round_trippers.go:463] GET https://172.18.211.78:8441/api/v1/namespaces/kube-system/pods/etcd-functional-200200
	I0528 13:47:30.604288    4516 round_trippers.go:469] Request Headers:
	I0528 13:47:30.604288    4516 round_trippers.go:473]     Accept: application/json, */*
	I0528 13:47:30.604288    4516 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 13:47:30.609385    4516 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 13:47:30.609385    4516 round_trippers.go:577] Response Headers:
	I0528 13:47:30.609385    4516 round_trippers.go:580]     Audit-Id: b4318009-cba7-4413-aa77-8c78cb5f2c75
	I0528 13:47:30.609385    4516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 13:47:30.609385    4516 round_trippers.go:580]     Content-Type: application/json
	I0528 13:47:30.609385    4516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6b7b9069-82e0-4cfd-9d0b-960d5eab4b37
	I0528 13:47:30.609385    4516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55d3383c-44bc-4b11-ad23-e49bd4d694f6
	I0528 13:47:30.609385    4516 round_trippers.go:580]     Date: Tue, 28 May 2024 20:47:30 GMT
	I0528 13:47:30.609385    4516 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-200200","namespace":"kube-system","uid":"a5cc1ae2-d338-4a08-9a90-3ee5ff5e04b7","resourceVersion":"537","creationTimestamp":"2024-05-28T20:44:41Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.18.211.78:2379","kubernetes.io/config.hash":"656d082e82aa61fe8809f24da8f979c4","kubernetes.io/config.mirror":"656d082e82aa61fe8809f24da8f979c4","kubernetes.io/config.seen":"2024-05-28T20:44:41.567684566Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-200200","uid":"1b987c37-282d-4e68-a3b1-23ab15a86979","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-28T20:44:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6819 chars]
	I0528 13:47:30.610453    4516 round_trippers.go:463] GET https://172.18.211.78:8441/api/v1/nodes/functional-200200
	I0528 13:47:30.610513    4516 round_trippers.go:469] Request Headers:
	I0528 13:47:30.610513    4516 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 13:47:30.610513    4516 round_trippers.go:473]     Accept: application/json, */*
	I0528 13:47:30.613988    4516 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 13:47:30.613988    4516 round_trippers.go:577] Response Headers:
	I0528 13:47:30.614292    4516 round_trippers.go:580]     Audit-Id: 735ee9cd-c835-4cbb-84f3-80b2f7176e59
	I0528 13:47:30.614292    4516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 13:47:30.614292    4516 round_trippers.go:580]     Content-Type: application/json
	I0528 13:47:30.614292    4516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6b7b9069-82e0-4cfd-9d0b-960d5eab4b37
	I0528 13:47:30.614292    4516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55d3383c-44bc-4b11-ad23-e49bd4d694f6
	I0528 13:47:30.614292    4516 round_trippers.go:580]     Date: Tue, 28 May 2024 20:47:30 GMT
	I0528 13:47:30.614525    4516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-200200","uid":"1b987c37-282d-4e68-a3b1-23ab15a86979","resourceVersion":"532","creationTimestamp":"2024-05-28T20:44:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-200200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"functional-200200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T13_44_42_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-28T20:44:37Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0528 13:47:31.117757    4516 round_trippers.go:463] GET https://172.18.211.78:8441/api/v1/namespaces/kube-system/pods/etcd-functional-200200
	I0528 13:47:31.117835    4516 round_trippers.go:469] Request Headers:
	I0528 13:47:31.117835    4516 round_trippers.go:473]     Accept: application/json, */*
	I0528 13:47:31.117835    4516 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 13:47:31.126448    4516 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0528 13:47:31.126779    4516 round_trippers.go:577] Response Headers:
	I0528 13:47:31.126779    4516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6b7b9069-82e0-4cfd-9d0b-960d5eab4b37
	I0528 13:47:31.126779    4516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55d3383c-44bc-4b11-ad23-e49bd4d694f6
	I0528 13:47:31.126779    4516 round_trippers.go:580]     Date: Tue, 28 May 2024 20:47:31 GMT
	I0528 13:47:31.126779    4516 round_trippers.go:580]     Audit-Id: 828f29dd-37c0-4741-8cd8-031c350a37e0
	I0528 13:47:31.126779    4516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 13:47:31.126779    4516 round_trippers.go:580]     Content-Type: application/json
	I0528 13:47:31.126937    4516 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-200200","namespace":"kube-system","uid":"a5cc1ae2-d338-4a08-9a90-3ee5ff5e04b7","resourceVersion":"537","creationTimestamp":"2024-05-28T20:44:41Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.18.211.78:2379","kubernetes.io/config.hash":"656d082e82aa61fe8809f24da8f979c4","kubernetes.io/config.mirror":"656d082e82aa61fe8809f24da8f979c4","kubernetes.io/config.seen":"2024-05-28T20:44:41.567684566Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-200200","uid":"1b987c37-282d-4e68-a3b1-23ab15a86979","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-28T20:44:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6819 chars]
	I0528 13:47:31.127743    4516 round_trippers.go:463] GET https://172.18.211.78:8441/api/v1/nodes/functional-200200
	I0528 13:47:31.127743    4516 round_trippers.go:469] Request Headers:
	I0528 13:47:31.127800    4516 round_trippers.go:473]     Accept: application/json, */*
	I0528 13:47:31.127800    4516 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 13:47:31.130598    4516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 13:47:31.130598    4516 round_trippers.go:577] Response Headers:
	I0528 13:47:31.130598    4516 round_trippers.go:580]     Content-Type: application/json
	I0528 13:47:31.130598    4516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6b7b9069-82e0-4cfd-9d0b-960d5eab4b37
	I0528 13:47:31.130598    4516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55d3383c-44bc-4b11-ad23-e49bd4d694f6
	I0528 13:47:31.130598    4516 round_trippers.go:580]     Date: Tue, 28 May 2024 20:47:31 GMT
	I0528 13:47:31.130598    4516 round_trippers.go:580]     Audit-Id: 21a8d74c-e422-4b2a-b71c-33451b6e1871
	I0528 13:47:31.130598    4516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 13:47:31.130598    4516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-200200","uid":"1b987c37-282d-4e68-a3b1-23ab15a86979","resourceVersion":"532","creationTimestamp":"2024-05-28T20:44:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-200200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"functional-200200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T13_44_42_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-28T20:44:37Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0528 13:47:31.603825    4516 round_trippers.go:463] GET https://172.18.211.78:8441/api/v1/namespaces/kube-system/pods/etcd-functional-200200
	I0528 13:47:31.603825    4516 round_trippers.go:469] Request Headers:
	I0528 13:47:31.603825    4516 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 13:47:31.603825    4516 round_trippers.go:473]     Accept: application/json, */*
	I0528 13:47:31.607383    4516 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 13:47:31.607744    4516 round_trippers.go:577] Response Headers:
	I0528 13:47:31.607744    4516 round_trippers.go:580]     Audit-Id: 5a07fafa-fc5a-435c-a8e3-0b2b7cd86671
	I0528 13:47:31.607744    4516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 13:47:31.607744    4516 round_trippers.go:580]     Content-Type: application/json
	I0528 13:47:31.607744    4516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6b7b9069-82e0-4cfd-9d0b-960d5eab4b37
	I0528 13:47:31.607744    4516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55d3383c-44bc-4b11-ad23-e49bd4d694f6
	I0528 13:47:31.607817    4516 round_trippers.go:580]     Date: Tue, 28 May 2024 20:47:31 GMT
	I0528 13:47:31.608014    4516 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-200200","namespace":"kube-system","uid":"a5cc1ae2-d338-4a08-9a90-3ee5ff5e04b7","resourceVersion":"537","creationTimestamp":"2024-05-28T20:44:41Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.18.211.78:2379","kubernetes.io/config.hash":"656d082e82aa61fe8809f24da8f979c4","kubernetes.io/config.mirror":"656d082e82aa61fe8809f24da8f979c4","kubernetes.io/config.seen":"2024-05-28T20:44:41.567684566Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-200200","uid":"1b987c37-282d-4e68-a3b1-23ab15a86979","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-28T20:44:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6819 chars]
	I0528 13:47:31.608595    4516 round_trippers.go:463] GET https://172.18.211.78:8441/api/v1/nodes/functional-200200
	I0528 13:47:31.608595    4516 round_trippers.go:469] Request Headers:
	I0528 13:47:31.608595    4516 round_trippers.go:473]     Accept: application/json, */*
	I0528 13:47:31.608595    4516 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 13:47:31.611710    4516 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 13:47:31.611710    4516 round_trippers.go:577] Response Headers:
	I0528 13:47:31.611710    4516 round_trippers.go:580]     Audit-Id: ad5433de-011d-454f-9989-5abf5cc12cae
	I0528 13:47:31.612062    4516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 13:47:31.612062    4516 round_trippers.go:580]     Content-Type: application/json
	I0528 13:47:31.612062    4516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6b7b9069-82e0-4cfd-9d0b-960d5eab4b37
	I0528 13:47:31.612062    4516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55d3383c-44bc-4b11-ad23-e49bd4d694f6
	I0528 13:47:31.612062    4516 round_trippers.go:580]     Date: Tue, 28 May 2024 20:47:31 GMT
	I0528 13:47:31.612181    4516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-200200","uid":"1b987c37-282d-4e68-a3b1-23ab15a86979","resourceVersion":"532","creationTimestamp":"2024-05-28T20:44:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-200200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"functional-200200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T13_44_42_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-28T20:44:37Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0528 13:47:32.118342    4516 round_trippers.go:463] GET https://172.18.211.78:8441/api/v1/namespaces/kube-system/pods/etcd-functional-200200
	I0528 13:47:32.118342    4516 round_trippers.go:469] Request Headers:
	I0528 13:47:32.118342    4516 round_trippers.go:473]     Accept: application/json, */*
	I0528 13:47:32.118431    4516 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 13:47:32.122506    4516 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 13:47:32.122506    4516 round_trippers.go:577] Response Headers:
	I0528 13:47:32.122804    4516 round_trippers.go:580]     Audit-Id: 8919433b-8951-45a8-aeb9-02950b87af93
	I0528 13:47:32.122896    4516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 13:47:32.122896    4516 round_trippers.go:580]     Content-Type: application/json
	I0528 13:47:32.122896    4516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6b7b9069-82e0-4cfd-9d0b-960d5eab4b37
	I0528 13:47:32.122896    4516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55d3383c-44bc-4b11-ad23-e49bd4d694f6
	I0528 13:47:32.122896    4516 round_trippers.go:580]     Date: Tue, 28 May 2024 20:47:32 GMT
	I0528 13:47:32.123132    4516 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-200200","namespace":"kube-system","uid":"a5cc1ae2-d338-4a08-9a90-3ee5ff5e04b7","resourceVersion":"537","creationTimestamp":"2024-05-28T20:44:41Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.18.211.78:2379","kubernetes.io/config.hash":"656d082e82aa61fe8809f24da8f979c4","kubernetes.io/config.mirror":"656d082e82aa61fe8809f24da8f979c4","kubernetes.io/config.seen":"2024-05-28T20:44:41.567684566Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-200200","uid":"1b987c37-282d-4e68-a3b1-23ab15a86979","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-28T20:44:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6819 chars]
	I0528 13:47:32.123876    4516 round_trippers.go:463] GET https://172.18.211.78:8441/api/v1/nodes/functional-200200
	I0528 13:47:32.123876    4516 round_trippers.go:469] Request Headers:
	I0528 13:47:32.123876    4516 round_trippers.go:473]     Accept: application/json, */*
	I0528 13:47:32.123876    4516 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 13:47:32.129399    4516 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0528 13:47:32.129399    4516 round_trippers.go:577] Response Headers:
	I0528 13:47:32.129399    4516 round_trippers.go:580]     Content-Type: application/json
	I0528 13:47:32.129399    4516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6b7b9069-82e0-4cfd-9d0b-960d5eab4b37
	I0528 13:47:32.129399    4516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55d3383c-44bc-4b11-ad23-e49bd4d694f6
	I0528 13:47:32.129399    4516 round_trippers.go:580]     Date: Tue, 28 May 2024 20:47:32 GMT
	I0528 13:47:32.129399    4516 round_trippers.go:580]     Audit-Id: c991e679-c333-46ea-862e-264fe8423f38
	I0528 13:47:32.129399    4516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 13:47:32.130235    4516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-200200","uid":"1b987c37-282d-4e68-a3b1-23ab15a86979","resourceVersion":"532","creationTimestamp":"2024-05-28T20:44:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-200200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"functional-200200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T13_44_42_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-28T20:44:37Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0528 13:47:32.130235    4516 pod_ready.go:102] pod "etcd-functional-200200" in "kube-system" namespace has status "Ready":"False"
	I0528 13:47:32.618118    4516 round_trippers.go:463] GET https://172.18.211.78:8441/api/v1/namespaces/kube-system/pods/etcd-functional-200200
	I0528 13:47:32.618268    4516 round_trippers.go:469] Request Headers:
	I0528 13:47:32.618268    4516 round_trippers.go:473]     Accept: application/json, */*
	I0528 13:47:32.618268    4516 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 13:47:32.626366    4516 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0528 13:47:32.626366    4516 round_trippers.go:577] Response Headers:
	I0528 13:47:32.626366    4516 round_trippers.go:580]     Audit-Id: 8c8bc13f-0ba7-4740-af81-70b64da2ac91
	I0528 13:47:32.626366    4516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 13:47:32.626366    4516 round_trippers.go:580]     Content-Type: application/json
	I0528 13:47:32.626366    4516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6b7b9069-82e0-4cfd-9d0b-960d5eab4b37
	I0528 13:47:32.626366    4516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55d3383c-44bc-4b11-ad23-e49bd4d694f6
	I0528 13:47:32.626366    4516 round_trippers.go:580]     Date: Tue, 28 May 2024 20:47:32 GMT
	I0528 13:47:32.626366    4516 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-200200","namespace":"kube-system","uid":"a5cc1ae2-d338-4a08-9a90-3ee5ff5e04b7","resourceVersion":"537","creationTimestamp":"2024-05-28T20:44:41Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.18.211.78:2379","kubernetes.io/config.hash":"656d082e82aa61fe8809f24da8f979c4","kubernetes.io/config.mirror":"656d082e82aa61fe8809f24da8f979c4","kubernetes.io/config.seen":"2024-05-28T20:44:41.567684566Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-200200","uid":"1b987c37-282d-4e68-a3b1-23ab15a86979","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-28T20:44:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6819 chars]
	I0528 13:47:32.627283    4516 round_trippers.go:463] GET https://172.18.211.78:8441/api/v1/nodes/functional-200200
	I0528 13:47:32.627283    4516 round_trippers.go:469] Request Headers:
	I0528 13:47:32.627283    4516 round_trippers.go:473]     Accept: application/json, */*
	I0528 13:47:32.627283    4516 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 13:47:32.630061    4516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 13:47:32.630061    4516 round_trippers.go:577] Response Headers:
	I0528 13:47:32.630061    4516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55d3383c-44bc-4b11-ad23-e49bd4d694f6
	I0528 13:47:32.630061    4516 round_trippers.go:580]     Date: Tue, 28 May 2024 20:47:32 GMT
	I0528 13:47:32.630061    4516 round_trippers.go:580]     Audit-Id: acc4ba73-46a3-4c64-9f36-64b3f1f209b8
	I0528 13:47:32.630061    4516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 13:47:32.630061    4516 round_trippers.go:580]     Content-Type: application/json
	I0528 13:47:32.630061    4516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6b7b9069-82e0-4cfd-9d0b-960d5eab4b37
	I0528 13:47:32.630061    4516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-200200","uid":"1b987c37-282d-4e68-a3b1-23ab15a86979","resourceVersion":"532","creationTimestamp":"2024-05-28T20:44:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-200200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"functional-200200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T13_44_42_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-28T20:44:37Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0528 13:47:33.117535    4516 round_trippers.go:463] GET https://172.18.211.78:8441/api/v1/namespaces/kube-system/pods/etcd-functional-200200
	I0528 13:47:33.117617    4516 round_trippers.go:469] Request Headers:
	I0528 13:47:33.117617    4516 round_trippers.go:473]     Accept: application/json, */*
	I0528 13:47:33.117726    4516 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 13:47:33.121047    4516 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 13:47:33.121766    4516 round_trippers.go:577] Response Headers:
	I0528 13:47:33.121766    4516 round_trippers.go:580]     Audit-Id: edd7f946-d50d-4bca-b68b-b218d68bce90
	I0528 13:47:33.121766    4516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 13:47:33.121766    4516 round_trippers.go:580]     Content-Type: application/json
	I0528 13:47:33.121766    4516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6b7b9069-82e0-4cfd-9d0b-960d5eab4b37
	I0528 13:47:33.121766    4516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55d3383c-44bc-4b11-ad23-e49bd4d694f6
	I0528 13:47:33.121766    4516 round_trippers.go:580]     Date: Tue, 28 May 2024 20:47:33 GMT
	I0528 13:47:33.122118    4516 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-200200","namespace":"kube-system","uid":"a5cc1ae2-d338-4a08-9a90-3ee5ff5e04b7","resourceVersion":"537","creationTimestamp":"2024-05-28T20:44:41Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.18.211.78:2379","kubernetes.io/config.hash":"656d082e82aa61fe8809f24da8f979c4","kubernetes.io/config.mirror":"656d082e82aa61fe8809f24da8f979c4","kubernetes.io/config.seen":"2024-05-28T20:44:41.567684566Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-200200","uid":"1b987c37-282d-4e68-a3b1-23ab15a86979","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-28T20:44:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6819 chars]
	I0528 13:47:33.122522    4516 round_trippers.go:463] GET https://172.18.211.78:8441/api/v1/nodes/functional-200200
	I0528 13:47:33.122522    4516 round_trippers.go:469] Request Headers:
	I0528 13:47:33.122522    4516 round_trippers.go:473]     Accept: application/json, */*
	I0528 13:47:33.122522    4516 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 13:47:33.126053    4516 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 13:47:33.126125    4516 round_trippers.go:577] Response Headers:
	I0528 13:47:33.126125    4516 round_trippers.go:580]     Date: Tue, 28 May 2024 20:47:33 GMT
	I0528 13:47:33.126125    4516 round_trippers.go:580]     Audit-Id: 2b6a3bc2-6df5-4226-8059-985b00bda3b6
	I0528 13:47:33.126125    4516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 13:47:33.126199    4516 round_trippers.go:580]     Content-Type: application/json
	I0528 13:47:33.126199    4516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6b7b9069-82e0-4cfd-9d0b-960d5eab4b37
	I0528 13:47:33.126199    4516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55d3383c-44bc-4b11-ad23-e49bd4d694f6
	I0528 13:47:33.126524    4516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-200200","uid":"1b987c37-282d-4e68-a3b1-23ab15a86979","resourceVersion":"532","creationTimestamp":"2024-05-28T20:44:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-200200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"functional-200200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T13_44_42_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-28T20:44:37Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0528 13:47:33.614272    4516 round_trippers.go:463] GET https://172.18.211.78:8441/api/v1/namespaces/kube-system/pods/etcd-functional-200200
	I0528 13:47:33.614272    4516 round_trippers.go:469] Request Headers:
	I0528 13:47:33.614272    4516 round_trippers.go:473]     Accept: application/json, */*
	I0528 13:47:33.614272    4516 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 13:47:33.619253    4516 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 13:47:33.619253    4516 round_trippers.go:577] Response Headers:
	I0528 13:47:33.619253    4516 round_trippers.go:580]     Date: Tue, 28 May 2024 20:47:33 GMT
	I0528 13:47:33.619253    4516 round_trippers.go:580]     Audit-Id: d9f58fee-cd76-4859-9a65-62618c763702
	I0528 13:47:33.619253    4516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 13:47:33.619253    4516 round_trippers.go:580]     Content-Type: application/json
	I0528 13:47:33.619253    4516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6b7b9069-82e0-4cfd-9d0b-960d5eab4b37
	I0528 13:47:33.619253    4516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55d3383c-44bc-4b11-ad23-e49bd4d694f6
	I0528 13:47:33.619253    4516 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-200200","namespace":"kube-system","uid":"a5cc1ae2-d338-4a08-9a90-3ee5ff5e04b7","resourceVersion":"537","creationTimestamp":"2024-05-28T20:44:41Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.18.211.78:2379","kubernetes.io/config.hash":"656d082e82aa61fe8809f24da8f979c4","kubernetes.io/config.mirror":"656d082e82aa61fe8809f24da8f979c4","kubernetes.io/config.seen":"2024-05-28T20:44:41.567684566Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-200200","uid":"1b987c37-282d-4e68-a3b1-23ab15a86979","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-28T20:44:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6819 chars]
	I0528 13:47:33.620393    4516 round_trippers.go:463] GET https://172.18.211.78:8441/api/v1/nodes/functional-200200
	I0528 13:47:33.620455    4516 round_trippers.go:469] Request Headers:
	I0528 13:47:33.620455    4516 round_trippers.go:473]     Accept: application/json, */*
	I0528 13:47:33.620455    4516 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 13:47:33.623878    4516 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 13:47:33.623878    4516 round_trippers.go:577] Response Headers:
	I0528 13:47:33.623878    4516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55d3383c-44bc-4b11-ad23-e49bd4d694f6
	I0528 13:47:33.623878    4516 round_trippers.go:580]     Date: Tue, 28 May 2024 20:47:33 GMT
	I0528 13:47:33.623878    4516 round_trippers.go:580]     Audit-Id: aca906af-095a-4865-8685-349bb765c4d2
	I0528 13:47:33.623878    4516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 13:47:33.623878    4516 round_trippers.go:580]     Content-Type: application/json
	I0528 13:47:33.623878    4516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6b7b9069-82e0-4cfd-9d0b-960d5eab4b37
	I0528 13:47:33.624660    4516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-200200","uid":"1b987c37-282d-4e68-a3b1-23ab15a86979","resourceVersion":"532","creationTimestamp":"2024-05-28T20:44:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-200200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"functional-200200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T13_44_42_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-28T20:44:37Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0528 13:47:34.116659    4516 round_trippers.go:463] GET https://172.18.211.78:8441/api/v1/namespaces/kube-system/pods/etcd-functional-200200
	I0528 13:47:34.116659    4516 round_trippers.go:469] Request Headers:
	I0528 13:47:34.116659    4516 round_trippers.go:473]     Accept: application/json, */*
	I0528 13:47:34.116659    4516 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 13:47:34.121697    4516 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0528 13:47:34.121697    4516 round_trippers.go:577] Response Headers:
	I0528 13:47:34.121697    4516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6b7b9069-82e0-4cfd-9d0b-960d5eab4b37
	I0528 13:47:34.121697    4516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55d3383c-44bc-4b11-ad23-e49bd4d694f6
	I0528 13:47:34.121697    4516 round_trippers.go:580]     Date: Tue, 28 May 2024 20:47:34 GMT
	I0528 13:47:34.121697    4516 round_trippers.go:580]     Audit-Id: b6c91190-6288-4280-bbbb-38eb049436b6
	I0528 13:47:34.121697    4516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 13:47:34.122710    4516 round_trippers.go:580]     Content-Type: application/json
	I0528 13:47:34.122913    4516 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-200200","namespace":"kube-system","uid":"a5cc1ae2-d338-4a08-9a90-3ee5ff5e04b7","resourceVersion":"537","creationTimestamp":"2024-05-28T20:44:41Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.18.211.78:2379","kubernetes.io/config.hash":"656d082e82aa61fe8809f24da8f979c4","kubernetes.io/config.mirror":"656d082e82aa61fe8809f24da8f979c4","kubernetes.io/config.seen":"2024-05-28T20:44:41.567684566Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-200200","uid":"1b987c37-282d-4e68-a3b1-23ab15a86979","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-28T20:44:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6819 chars]
	I0528 13:47:34.123646    4516 round_trippers.go:463] GET https://172.18.211.78:8441/api/v1/nodes/functional-200200
	I0528 13:47:34.123646    4516 round_trippers.go:469] Request Headers:
	I0528 13:47:34.123718    4516 round_trippers.go:473]     Accept: application/json, */*
	I0528 13:47:34.123718    4516 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 13:47:34.126028    4516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 13:47:34.126028    4516 round_trippers.go:577] Response Headers:
	I0528 13:47:34.126028    4516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 13:47:34.126028    4516 round_trippers.go:580]     Content-Type: application/json
	I0528 13:47:34.126028    4516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6b7b9069-82e0-4cfd-9d0b-960d5eab4b37
	I0528 13:47:34.126028    4516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55d3383c-44bc-4b11-ad23-e49bd4d694f6
	I0528 13:47:34.126028    4516 round_trippers.go:580]     Date: Tue, 28 May 2024 20:47:34 GMT
	I0528 13:47:34.126028    4516 round_trippers.go:580]     Audit-Id: 85646031-cc2b-4e2e-b7bd-c8945656a132
	I0528 13:47:34.127212    4516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-200200","uid":"1b987c37-282d-4e68-a3b1-23ab15a86979","resourceVersion":"532","creationTimestamp":"2024-05-28T20:44:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-200200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"functional-200200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T13_44_42_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-28T20:44:37Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0528 13:47:34.604124    4516 round_trippers.go:463] GET https://172.18.211.78:8441/api/v1/namespaces/kube-system/pods/etcd-functional-200200
	I0528 13:47:34.604199    4516 round_trippers.go:469] Request Headers:
	I0528 13:47:34.604199    4516 round_trippers.go:473]     Accept: application/json, */*
	I0528 13:47:34.604199    4516 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 13:47:34.610676    4516 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0528 13:47:34.610846    4516 round_trippers.go:577] Response Headers:
	I0528 13:47:34.610846    4516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55d3383c-44bc-4b11-ad23-e49bd4d694f6
	I0528 13:47:34.610846    4516 round_trippers.go:580]     Date: Tue, 28 May 2024 20:47:34 GMT
	I0528 13:47:34.610898    4516 round_trippers.go:580]     Audit-Id: 71998044-5691-4ab0-843b-6cc5588e5a81
	I0528 13:47:34.610912    4516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 13:47:34.610912    4516 round_trippers.go:580]     Content-Type: application/json
	I0528 13:47:34.610912    4516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6b7b9069-82e0-4cfd-9d0b-960d5eab4b37
	I0528 13:47:34.611525    4516 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-200200","namespace":"kube-system","uid":"a5cc1ae2-d338-4a08-9a90-3ee5ff5e04b7","resourceVersion":"612","creationTimestamp":"2024-05-28T20:44:41Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.18.211.78:2379","kubernetes.io/config.hash":"656d082e82aa61fe8809f24da8f979c4","kubernetes.io/config.mirror":"656d082e82aa61fe8809f24da8f979c4","kubernetes.io/config.seen":"2024-05-28T20:44:41.567684566Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-200200","uid":"1b987c37-282d-4e68-a3b1-23ab15a86979","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-28T20:44:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6595 chars]
	I0528 13:47:34.612537    4516 round_trippers.go:463] GET https://172.18.211.78:8441/api/v1/nodes/functional-200200
	I0528 13:47:34.612537    4516 round_trippers.go:469] Request Headers:
	I0528 13:47:34.612537    4516 round_trippers.go:473]     Accept: application/json, */*
	I0528 13:47:34.612537    4516 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 13:47:34.614905    4516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 13:47:34.614905    4516 round_trippers.go:577] Response Headers:
	I0528 13:47:34.614905    4516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55d3383c-44bc-4b11-ad23-e49bd4d694f6
	I0528 13:47:34.615724    4516 round_trippers.go:580]     Date: Tue, 28 May 2024 20:47:34 GMT
	I0528 13:47:34.615724    4516 round_trippers.go:580]     Audit-Id: 13b8c4b0-bc86-4b6c-80bf-4297c248a5f0
	I0528 13:47:34.615760    4516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 13:47:34.615760    4516 round_trippers.go:580]     Content-Type: application/json
	I0528 13:47:34.615760    4516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6b7b9069-82e0-4cfd-9d0b-960d5eab4b37
	I0528 13:47:34.615939    4516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-200200","uid":"1b987c37-282d-4e68-a3b1-23ab15a86979","resourceVersion":"532","creationTimestamp":"2024-05-28T20:44:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-200200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"functional-200200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T13_44_42_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-28T20:44:37Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0528 13:47:34.615939    4516 pod_ready.go:92] pod "etcd-functional-200200" in "kube-system" namespace has status "Ready":"True"
	I0528 13:47:34.615939    4516 pod_ready.go:81] duration metric: took 6.5131083s for pod "etcd-functional-200200" in "kube-system" namespace to be "Ready" ...
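The pod_ready.go lines above mark the end of the readiness wait that produced the preceding poll cycles: roughly every 500ms the client re-fetches the pod (and its node) until the pod's Ready condition flips to True, then records the elapsed time. For context, here is a minimal, illustrative client-go sketch of that polling pattern. It is not minikube's actual pod_ready.go; the helper name waitPodReady and the kubeconfig path are assumptions for the example, and for brevity the sketch checks only the pod's Ready condition, not the node.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady is a hypothetical helper: it polls the API server every 500ms
// (matching the ~500ms cadence of the timestamps in the log above) until the
// named pod reports the Ready condition as True, or the timeout expires.
func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	start := time.Now()
	err := wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // transient API errors: keep polling
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	if err == nil {
		fmt.Printf("pod %q ready after %s\n", name, time.Since(start))
	}
	return err
}

func main() {
	// Assumes a kubeconfig at the default location (~/.kube/config).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitPodReady(context.Background(), cs, "kube-system", "etcd-functional-200200", 4*time.Minute); err != nil {
		panic(err)
	}
}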
	I0528 13:47:34.615939    4516 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-functional-200200" in "kube-system" namespace to be "Ready" ...
	I0528 13:47:34.616470    4516 round_trippers.go:463] GET https://172.18.211.78:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-200200
	I0528 13:47:34.616470    4516 round_trippers.go:469] Request Headers:
	I0528 13:47:34.616470    4516 round_trippers.go:473]     Accept: application/json, */*
	I0528 13:47:34.616616    4516 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 13:47:34.619533    4516 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0528 13:47:34.619533    4516 round_trippers.go:577] Response Headers:
	I0528 13:47:34.619533    4516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55d3383c-44bc-4b11-ad23-e49bd4d694f6
	I0528 13:47:34.619533    4516 round_trippers.go:580]     Date: Tue, 28 May 2024 20:47:34 GMT
	I0528 13:47:34.619533    4516 round_trippers.go:580]     Audit-Id: cb8258c4-5a0d-4d92-ac01-6431c97d2cef
	I0528 13:47:34.619533    4516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 13:47:34.619533    4516 round_trippers.go:580]     Content-Type: application/json
	I0528 13:47:34.619533    4516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6b7b9069-82e0-4cfd-9d0b-960d5eab4b37
	I0528 13:47:34.619533    4516 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-200200","namespace":"kube-system","uid":"1562eaca-402c-4134-9e2c-30da4d4a592e","resourceVersion":"607","creationTimestamp":"2024-05-28T20:44:41Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.18.211.78:8441","kubernetes.io/config.hash":"ecf535b188f0bbbe068dac0d93a34e49","kubernetes.io/config.mirror":"ecf535b188f0bbbe068dac0d93a34e49","kubernetes.io/config.seen":"2024-05-28T20:44:34.013696900Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-200200","uid":"1b987c37-282d-4e68-a3b1-23ab15a86979","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-28T20:44:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 8042 chars]
	I0528 13:47:34.619533    4516 round_trippers.go:463] GET https://172.18.211.78:8441/api/v1/nodes/functional-200200
	I0528 13:47:34.619533    4516 round_trippers.go:469] Request Headers:
	I0528 13:47:34.620354    4516 round_trippers.go:473]     Accept: application/json, */*
	I0528 13:47:34.620354    4516 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 13:47:34.623571    4516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 13:47:34.623571    4516 round_trippers.go:577] Response Headers:
	I0528 13:47:34.623571    4516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 13:47:34.623571    4516 round_trippers.go:580]     Content-Type: application/json
	I0528 13:47:34.623571    4516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6b7b9069-82e0-4cfd-9d0b-960d5eab4b37
	I0528 13:47:34.623571    4516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55d3383c-44bc-4b11-ad23-e49bd4d694f6
	I0528 13:47:34.623571    4516 round_trippers.go:580]     Date: Tue, 28 May 2024 20:47:34 GMT
	I0528 13:47:34.623571    4516 round_trippers.go:580]     Audit-Id: 9923d880-11db-4982-839a-e09735c03e75
	I0528 13:47:34.623571    4516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-200200","uid":"1b987c37-282d-4e68-a3b1-23ab15a86979","resourceVersion":"532","creationTimestamp":"2024-05-28T20:44:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-200200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"functional-200200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T13_44_42_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-28T20:44:37Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0528 13:47:34.623571    4516 pod_ready.go:92] pod "kube-apiserver-functional-200200" in "kube-system" namespace has status "Ready":"True"
	I0528 13:47:34.623571    4516 pod_ready.go:81] duration metric: took 7.101ms for pod "kube-apiserver-functional-200200" in "kube-system" namespace to be "Ready" ...
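The repeated "GET ... / Request Headers / Response Status ... / Response Headers" blocks that make up most of this log come from client-go's verbose request logging: at higher klog verbosity the HTTP transport is wrapped so that each API call's method, URL, headers, status (with elapsed time), and a truncated response body are written to the log. The sketch below is an illustrative stand-in for that wrapper, not client-go's implementation; any remote URL used with it is hypothetical.

package main

import (
	"io"
	"log"
	"net/http"
	"strings"
	"time"
)

// loggingRoundTripper logs the request line, request headers, response status
// with elapsed time, and response headers, in the same spirit as the
// round_trippers.go lines in this log.
type loggingRoundTripper struct {
	next http.RoundTripper
}

func (l loggingRoundTripper) RoundTrip(req *http.Request) (*http.Response, error) {
	log.Printf("%s %s", req.Method, req.URL)
	log.Printf("Request Headers:")
	for k, vals := range req.Header {
		log.Printf("    %s: %s", k, strings.Join(vals, ", "))
	}
	start := time.Now()
	resp, err := l.next.RoundTrip(req)
	if err != nil {
		return nil, err
	}
	log.Printf("Response Status: %s in %d milliseconds", resp.Status, time.Since(start).Milliseconds())
	log.Printf("Response Headers:")
	for k, vals := range resp.Header {
		log.Printf("    %s: %s", k, strings.Join(vals, ", "))
	}
	return resp, err
}

func main() {
	// Wrap the default transport; every request made through this client
	// is logged before and after the round trip.
	client := &http.Client{Transport: loggingRoundTripper{next: http.DefaultTransport}}
	resp, err := client.Get("https://example.com/")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	_, _ = io.Copy(io.Discard, resp.Body)
}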
	I0528 13:47:34.623571    4516 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-functional-200200" in "kube-system" namespace to be "Ready" ...
	I0528 13:47:34.623571    4516 round_trippers.go:463] GET https://172.18.211.78:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-200200
	I0528 13:47:34.623571    4516 round_trippers.go:469] Request Headers:
	I0528 13:47:34.623571    4516 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 13:47:34.623571    4516 round_trippers.go:473]     Accept: application/json, */*
	I0528 13:47:34.627487    4516 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 13:47:34.627487    4516 round_trippers.go:577] Response Headers:
	I0528 13:47:34.627487    4516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 13:47:34.627487    4516 round_trippers.go:580]     Content-Type: application/json
	I0528 13:47:34.627487    4516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6b7b9069-82e0-4cfd-9d0b-960d5eab4b37
	I0528 13:47:34.627700    4516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55d3383c-44bc-4b11-ad23-e49bd4d694f6
	I0528 13:47:34.627700    4516 round_trippers.go:580]     Date: Tue, 28 May 2024 20:47:34 GMT
	I0528 13:47:34.627700    4516 round_trippers.go:580]     Audit-Id: babd441f-5a59-441c-bb3d-80b56a3fb54c
	I0528 13:47:34.628064    4516 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-functional-200200","namespace":"kube-system","uid":"4044c46e-3c64-4904-8e8e-88dc2dcd9446","resourceVersion":"534","creationTimestamp":"2024-05-28T20:44:41Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"1cc5a4528b10c2379bb2e1b42d1ef2a0","kubernetes.io/config.mirror":"1cc5a4528b10c2379bb2e1b42d1ef2a0","kubernetes.io/config.seen":"2024-05-28T20:44:41.567690863Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-200200","uid":"1b987c37-282d-4e68-a3b1-23ab15a86979","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-28T20:44:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes
.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{"." [truncated 7734 chars]
	I0528 13:47:34.628746    4516 round_trippers.go:463] GET https://172.18.211.78:8441/api/v1/nodes/functional-200200
	I0528 13:47:34.628800    4516 round_trippers.go:469] Request Headers:
	I0528 13:47:34.628800    4516 round_trippers.go:473]     Accept: application/json, */*
	I0528 13:47:34.628800    4516 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 13:47:34.630493    4516 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0528 13:47:34.631323    4516 round_trippers.go:577] Response Headers:
	I0528 13:47:34.631323    4516 round_trippers.go:580]     Audit-Id: fbf18893-0ae7-4822-936e-43d173362b4b
	I0528 13:47:34.631323    4516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 13:47:34.631323    4516 round_trippers.go:580]     Content-Type: application/json
	I0528 13:47:34.631389    4516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6b7b9069-82e0-4cfd-9d0b-960d5eab4b37
	I0528 13:47:34.631389    4516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55d3383c-44bc-4b11-ad23-e49bd4d694f6
	I0528 13:47:34.631389    4516 round_trippers.go:580]     Date: Tue, 28 May 2024 20:47:34 GMT
	I0528 13:47:34.631389    4516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-200200","uid":"1b987c37-282d-4e68-a3b1-23ab15a86979","resourceVersion":"532","creationTimestamp":"2024-05-28T20:44:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-200200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"functional-200200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T13_44_42_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-28T20:44:37Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0528 13:47:35.136100    4516 round_trippers.go:463] GET https://172.18.211.78:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-200200
	I0528 13:47:35.136100    4516 round_trippers.go:469] Request Headers:
	I0528 13:47:35.136100    4516 round_trippers.go:473]     Accept: application/json, */*
	I0528 13:47:35.136100    4516 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 13:47:35.140248    4516 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 13:47:35.141010    4516 round_trippers.go:577] Response Headers:
	I0528 13:47:35.141010    4516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6b7b9069-82e0-4cfd-9d0b-960d5eab4b37
	I0528 13:47:35.141010    4516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55d3383c-44bc-4b11-ad23-e49bd4d694f6
	I0528 13:47:35.141010    4516 round_trippers.go:580]     Date: Tue, 28 May 2024 20:47:35 GMT
	I0528 13:47:35.141010    4516 round_trippers.go:580]     Audit-Id: c8175808-c6e9-4c49-ab7c-5c54a31a6871
	I0528 13:47:35.141010    4516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 13:47:35.141142    4516 round_trippers.go:580]     Content-Type: application/json
	I0528 13:47:35.141652    4516 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-functional-200200","namespace":"kube-system","uid":"4044c46e-3c64-4904-8e8e-88dc2dcd9446","resourceVersion":"534","creationTimestamp":"2024-05-28T20:44:41Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"1cc5a4528b10c2379bb2e1b42d1ef2a0","kubernetes.io/config.mirror":"1cc5a4528b10c2379bb2e1b42d1ef2a0","kubernetes.io/config.seen":"2024-05-28T20:44:41.567690863Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-200200","uid":"1b987c37-282d-4e68-a3b1-23ab15a86979","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-28T20:44:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes
.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{"." [truncated 7734 chars]
	I0528 13:47:35.142642    4516 round_trippers.go:463] GET https://172.18.211.78:8441/api/v1/nodes/functional-200200
	I0528 13:47:35.142642    4516 round_trippers.go:469] Request Headers:
	I0528 13:47:35.142740    4516 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 13:47:35.142740    4516 round_trippers.go:473]     Accept: application/json, */*
	I0528 13:47:35.146344    4516 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 13:47:35.146344    4516 round_trippers.go:577] Response Headers:
	I0528 13:47:35.146344    4516 round_trippers.go:580]     Date: Tue, 28 May 2024 20:47:35 GMT
	I0528 13:47:35.146344    4516 round_trippers.go:580]     Audit-Id: b8c8adba-6141-4bae-a6fb-aede2946f3ec
	I0528 13:47:35.146344    4516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 13:47:35.146344    4516 round_trippers.go:580]     Content-Type: application/json
	I0528 13:47:35.146344    4516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6b7b9069-82e0-4cfd-9d0b-960d5eab4b37
	I0528 13:47:35.146344    4516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55d3383c-44bc-4b11-ad23-e49bd4d694f6
	I0528 13:47:35.146344    4516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-200200","uid":"1b987c37-282d-4e68-a3b1-23ab15a86979","resourceVersion":"532","creationTimestamp":"2024-05-28T20:44:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-200200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"functional-200200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T13_44_42_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-28T20:44:37Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0528 13:47:35.639916    4516 round_trippers.go:463] GET https://172.18.211.78:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-200200
	I0528 13:47:35.639972    4516 round_trippers.go:469] Request Headers:
	I0528 13:47:35.640027    4516 round_trippers.go:473]     Accept: application/json, */*
	I0528 13:47:35.640027    4516 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 13:47:35.643739    4516 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 13:47:35.643739    4516 round_trippers.go:577] Response Headers:
	I0528 13:47:35.643739    4516 round_trippers.go:580]     Audit-Id: 5608868c-edd6-4862-bf3d-f403ac9d61ba
	I0528 13:47:35.643739    4516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 13:47:35.643739    4516 round_trippers.go:580]     Content-Type: application/json
	I0528 13:47:35.643739    4516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6b7b9069-82e0-4cfd-9d0b-960d5eab4b37
	I0528 13:47:35.644315    4516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55d3383c-44bc-4b11-ad23-e49bd4d694f6
	I0528 13:47:35.644315    4516 round_trippers.go:580]     Date: Tue, 28 May 2024 20:47:35 GMT
	I0528 13:47:35.644450    4516 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-functional-200200","namespace":"kube-system","uid":"4044c46e-3c64-4904-8e8e-88dc2dcd9446","resourceVersion":"534","creationTimestamp":"2024-05-28T20:44:41Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"1cc5a4528b10c2379bb2e1b42d1ef2a0","kubernetes.io/config.mirror":"1cc5a4528b10c2379bb2e1b42d1ef2a0","kubernetes.io/config.seen":"2024-05-28T20:44:41.567690863Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-200200","uid":"1b987c37-282d-4e68-a3b1-23ab15a86979","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-28T20:44:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes
.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{"." [truncated 7734 chars]
	I0528 13:47:35.645392    4516 round_trippers.go:463] GET https://172.18.211.78:8441/api/v1/nodes/functional-200200
	I0528 13:47:35.645392    4516 round_trippers.go:469] Request Headers:
	I0528 13:47:35.645392    4516 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 13:47:35.645392    4516 round_trippers.go:473]     Accept: application/json, */*
	I0528 13:47:35.648843    4516 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 13:47:35.648843    4516 round_trippers.go:577] Response Headers:
	I0528 13:47:35.648984    4516 round_trippers.go:580]     Audit-Id: fcf24df2-6183-45b1-9778-70fdd8e3ba30
	I0528 13:47:35.649006    4516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 13:47:35.649006    4516 round_trippers.go:580]     Content-Type: application/json
	I0528 13:47:35.649006    4516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6b7b9069-82e0-4cfd-9d0b-960d5eab4b37
	I0528 13:47:35.649006    4516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55d3383c-44bc-4b11-ad23-e49bd4d694f6
	I0528 13:47:35.649006    4516 round_trippers.go:580]     Date: Tue, 28 May 2024 20:47:35 GMT
	I0528 13:47:35.650667    4516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-200200","uid":"1b987c37-282d-4e68-a3b1-23ab15a86979","resourceVersion":"532","creationTimestamp":"2024-05-28T20:44:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-200200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"functional-200200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T13_44_42_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-28T20:44:37Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0528 13:47:36.131416    4516 round_trippers.go:463] GET https://172.18.211.78:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-200200
	I0528 13:47:36.131416    4516 round_trippers.go:469] Request Headers:
	I0528 13:47:36.131416    4516 round_trippers.go:473]     Accept: application/json, */*
	I0528 13:47:36.131416    4516 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 13:47:36.136447    4516 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0528 13:47:36.136528    4516 round_trippers.go:577] Response Headers:
	I0528 13:47:36.136528    4516 round_trippers.go:580]     Date: Tue, 28 May 2024 20:47:36 GMT
	I0528 13:47:36.136528    4516 round_trippers.go:580]     Audit-Id: 2a038ab4-3100-461d-bfed-383da221af32
	I0528 13:47:36.136528    4516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 13:47:36.136528    4516 round_trippers.go:580]     Content-Type: application/json
	I0528 13:47:36.136528    4516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6b7b9069-82e0-4cfd-9d0b-960d5eab4b37
	I0528 13:47:36.136528    4516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55d3383c-44bc-4b11-ad23-e49bd4d694f6
	I0528 13:47:36.136878    4516 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-functional-200200","namespace":"kube-system","uid":"4044c46e-3c64-4904-8e8e-88dc2dcd9446","resourceVersion":"534","creationTimestamp":"2024-05-28T20:44:41Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"1cc5a4528b10c2379bb2e1b42d1ef2a0","kubernetes.io/config.mirror":"1cc5a4528b10c2379bb2e1b42d1ef2a0","kubernetes.io/config.seen":"2024-05-28T20:44:41.567690863Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-200200","uid":"1b987c37-282d-4e68-a3b1-23ab15a86979","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-28T20:44:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes
.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{"." [truncated 7734 chars]
	I0528 13:47:36.137834    4516 round_trippers.go:463] GET https://172.18.211.78:8441/api/v1/nodes/functional-200200
	I0528 13:47:36.137887    4516 round_trippers.go:469] Request Headers:
	I0528 13:47:36.137887    4516 round_trippers.go:473]     Accept: application/json, */*
	I0528 13:47:36.137887    4516 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 13:47:36.141032    4516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 13:47:36.141116    4516 round_trippers.go:577] Response Headers:
	I0528 13:47:36.141116    4516 round_trippers.go:580]     Audit-Id: dd2365f0-8cbd-47ff-8393-47b21bed82fb
	I0528 13:47:36.141116    4516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 13:47:36.141116    4516 round_trippers.go:580]     Content-Type: application/json
	I0528 13:47:36.141116    4516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6b7b9069-82e0-4cfd-9d0b-960d5eab4b37
	I0528 13:47:36.141116    4516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55d3383c-44bc-4b11-ad23-e49bd4d694f6
	I0528 13:47:36.141116    4516 round_trippers.go:580]     Date: Tue, 28 May 2024 20:47:36 GMT
	I0528 13:47:36.141693    4516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-200200","uid":"1b987c37-282d-4e68-a3b1-23ab15a86979","resourceVersion":"532","creationTimestamp":"2024-05-28T20:44:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-200200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"functional-200200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T13_44_42_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-28T20:44:37Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0528 13:47:36.633548    4516 round_trippers.go:463] GET https://172.18.211.78:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-200200
	I0528 13:47:36.633651    4516 round_trippers.go:469] Request Headers:
	I0528 13:47:36.633651    4516 round_trippers.go:473]     Accept: application/json, */*
	I0528 13:47:36.633651    4516 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 13:47:36.636458    4516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 13:47:36.636458    4516 round_trippers.go:577] Response Headers:
	I0528 13:47:36.636458    4516 round_trippers.go:580]     Content-Type: application/json
	I0528 13:47:36.637356    4516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6b7b9069-82e0-4cfd-9d0b-960d5eab4b37
	I0528 13:47:36.637356    4516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55d3383c-44bc-4b11-ad23-e49bd4d694f6
	I0528 13:47:36.637356    4516 round_trippers.go:580]     Date: Tue, 28 May 2024 20:47:36 GMT
	I0528 13:47:36.637356    4516 round_trippers.go:580]     Audit-Id: c24c08a1-8ccb-4d07-a24e-70d9186b5446
	I0528 13:47:36.637396    4516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 13:47:36.637748    4516 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-functional-200200","namespace":"kube-system","uid":"4044c46e-3c64-4904-8e8e-88dc2dcd9446","resourceVersion":"534","creationTimestamp":"2024-05-28T20:44:41Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"1cc5a4528b10c2379bb2e1b42d1ef2a0","kubernetes.io/config.mirror":"1cc5a4528b10c2379bb2e1b42d1ef2a0","kubernetes.io/config.seen":"2024-05-28T20:44:41.567690863Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-200200","uid":"1b987c37-282d-4e68-a3b1-23ab15a86979","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-28T20:44:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes
.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{"." [truncated 7734 chars]
	I0528 13:47:36.638505    4516 round_trippers.go:463] GET https://172.18.211.78:8441/api/v1/nodes/functional-200200
	I0528 13:47:36.638505    4516 round_trippers.go:469] Request Headers:
	I0528 13:47:36.638505    4516 round_trippers.go:473]     Accept: application/json, */*
	I0528 13:47:36.638505    4516 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 13:47:36.641208    4516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 13:47:36.641208    4516 round_trippers.go:577] Response Headers:
	I0528 13:47:36.641634    4516 round_trippers.go:580]     Audit-Id: 1b2fc8f3-bf86-45c1-8943-1cb71abc0aad
	I0528 13:47:36.641634    4516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 13:47:36.641634    4516 round_trippers.go:580]     Content-Type: application/json
	I0528 13:47:36.641634    4516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6b7b9069-82e0-4cfd-9d0b-960d5eab4b37
	I0528 13:47:36.641634    4516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55d3383c-44bc-4b11-ad23-e49bd4d694f6
	I0528 13:47:36.641634    4516 round_trippers.go:580]     Date: Tue, 28 May 2024 20:47:36 GMT
	I0528 13:47:36.641876    4516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-200200","uid":"1b987c37-282d-4e68-a3b1-23ab15a86979","resourceVersion":"532","creationTimestamp":"2024-05-28T20:44:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-200200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"functional-200200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T13_44_42_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-28T20:44:37Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0528 13:47:36.642462    4516 pod_ready.go:102] pod "kube-controller-manager-functional-200200" in "kube-system" namespace has status "Ready":"False"
	I0528 13:47:37.137805    4516 round_trippers.go:463] GET https://172.18.211.78:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-200200
	I0528 13:47:37.138188    4516 round_trippers.go:469] Request Headers:
	I0528 13:47:37.138188    4516 round_trippers.go:473]     Accept: application/json, */*
	I0528 13:47:37.138188    4516 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 13:47:37.141913    4516 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 13:47:37.141913    4516 round_trippers.go:577] Response Headers:
	I0528 13:47:37.141913    4516 round_trippers.go:580]     Content-Type: application/json
	I0528 13:47:37.141913    4516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6b7b9069-82e0-4cfd-9d0b-960d5eab4b37
	I0528 13:47:37.142380    4516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55d3383c-44bc-4b11-ad23-e49bd4d694f6
	I0528 13:47:37.142380    4516 round_trippers.go:580]     Date: Tue, 28 May 2024 20:47:37 GMT
	I0528 13:47:37.142380    4516 round_trippers.go:580]     Audit-Id: 4becc6ce-ece5-416d-8da2-5166ab77c0df
	I0528 13:47:37.142380    4516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 13:47:37.143036    4516 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-functional-200200","namespace":"kube-system","uid":"4044c46e-3c64-4904-8e8e-88dc2dcd9446","resourceVersion":"534","creationTimestamp":"2024-05-28T20:44:41Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"1cc5a4528b10c2379bb2e1b42d1ef2a0","kubernetes.io/config.mirror":"1cc5a4528b10c2379bb2e1b42d1ef2a0","kubernetes.io/config.seen":"2024-05-28T20:44:41.567690863Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-200200","uid":"1b987c37-282d-4e68-a3b1-23ab15a86979","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-28T20:44:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes
.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{"." [truncated 7734 chars]
	I0528 13:47:37.144150    4516 round_trippers.go:463] GET https://172.18.211.78:8441/api/v1/nodes/functional-200200
	I0528 13:47:37.144265    4516 round_trippers.go:469] Request Headers:
	I0528 13:47:37.144265    4516 round_trippers.go:473]     Accept: application/json, */*
	I0528 13:47:37.144385    4516 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 13:47:37.146726    4516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 13:47:37.146726    4516 round_trippers.go:577] Response Headers:
	I0528 13:47:37.146726    4516 round_trippers.go:580]     Audit-Id: 622752b6-e850-4e9e-b62c-ef283c34f192
	I0528 13:47:37.146726    4516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 13:47:37.146726    4516 round_trippers.go:580]     Content-Type: application/json
	I0528 13:47:37.146726    4516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6b7b9069-82e0-4cfd-9d0b-960d5eab4b37
	I0528 13:47:37.146726    4516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55d3383c-44bc-4b11-ad23-e49bd4d694f6
	I0528 13:47:37.146726    4516 round_trippers.go:580]     Date: Tue, 28 May 2024 20:47:37 GMT
	I0528 13:47:37.147721    4516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-200200","uid":"1b987c37-282d-4e68-a3b1-23ab15a86979","resourceVersion":"532","creationTimestamp":"2024-05-28T20:44:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-200200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"functional-200200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T13_44_42_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-28T20:44:37Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0528 13:47:37.635486    4516 round_trippers.go:463] GET https://172.18.211.78:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-200200
	I0528 13:47:37.635539    4516 round_trippers.go:469] Request Headers:
	I0528 13:47:37.635611    4516 round_trippers.go:473]     Accept: application/json, */*
	I0528 13:47:37.635611    4516 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 13:47:37.639258    4516 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 13:47:37.639258    4516 round_trippers.go:577] Response Headers:
	I0528 13:47:37.639258    4516 round_trippers.go:580]     Content-Type: application/json
	I0528 13:47:37.639258    4516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6b7b9069-82e0-4cfd-9d0b-960d5eab4b37
	I0528 13:47:37.639258    4516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55d3383c-44bc-4b11-ad23-e49bd4d694f6
	I0528 13:47:37.639258    4516 round_trippers.go:580]     Date: Tue, 28 May 2024 20:47:37 GMT
	I0528 13:47:37.639258    4516 round_trippers.go:580]     Audit-Id: 194938dd-524c-417f-99fc-1965b74234f9
	I0528 13:47:37.639559    4516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 13:47:37.640080    4516 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-functional-200200","namespace":"kube-system","uid":"4044c46e-3c64-4904-8e8e-88dc2dcd9446","resourceVersion":"534","creationTimestamp":"2024-05-28T20:44:41Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"1cc5a4528b10c2379bb2e1b42d1ef2a0","kubernetes.io/config.mirror":"1cc5a4528b10c2379bb2e1b42d1ef2a0","kubernetes.io/config.seen":"2024-05-28T20:44:41.567690863Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-200200","uid":"1b987c37-282d-4e68-a3b1-23ab15a86979","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-28T20:44:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes
.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{"." [truncated 7734 chars]
	I0528 13:47:37.640327    4516 round_trippers.go:463] GET https://172.18.211.78:8441/api/v1/nodes/functional-200200
	I0528 13:47:37.640327    4516 round_trippers.go:469] Request Headers:
	I0528 13:47:37.640327    4516 round_trippers.go:473]     Accept: application/json, */*
	I0528 13:47:37.640875    4516 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 13:47:37.643564    4516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 13:47:37.643853    4516 round_trippers.go:577] Response Headers:
	I0528 13:47:37.643853    4516 round_trippers.go:580]     Audit-Id: d82f1465-d058-461d-951a-4585b1fbec6a
	I0528 13:47:37.643853    4516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 13:47:37.643853    4516 round_trippers.go:580]     Content-Type: application/json
	I0528 13:47:37.643853    4516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6b7b9069-82e0-4cfd-9d0b-960d5eab4b37
	I0528 13:47:37.643853    4516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55d3383c-44bc-4b11-ad23-e49bd4d694f6
	I0528 13:47:37.643853    4516 round_trippers.go:580]     Date: Tue, 28 May 2024 20:47:37 GMT
	I0528 13:47:37.644158    4516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-200200","uid":"1b987c37-282d-4e68-a3b1-23ab15a86979","resourceVersion":"532","creationTimestamp":"2024-05-28T20:44:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-200200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"functional-200200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T13_44_42_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-28T20:44:37Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0528 13:47:38.136087    4516 round_trippers.go:463] GET https://172.18.211.78:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-200200
	I0528 13:47:38.136087    4516 round_trippers.go:469] Request Headers:
	I0528 13:47:38.136168    4516 round_trippers.go:473]     Accept: application/json, */*
	I0528 13:47:38.136168    4516 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 13:47:38.140121    4516 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 13:47:38.140557    4516 round_trippers.go:577] Response Headers:
	I0528 13:47:38.140557    4516 round_trippers.go:580]     Date: Tue, 28 May 2024 20:47:38 GMT
	I0528 13:47:38.140557    4516 round_trippers.go:580]     Audit-Id: 4badcc92-7e5a-4aa2-a50e-807a92028579
	I0528 13:47:38.140658    4516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 13:47:38.140658    4516 round_trippers.go:580]     Content-Type: application/json
	I0528 13:47:38.140658    4516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6b7b9069-82e0-4cfd-9d0b-960d5eab4b37
	I0528 13:47:38.140658    4516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55d3383c-44bc-4b11-ad23-e49bd4d694f6
	I0528 13:47:38.140658    4516 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-functional-200200","namespace":"kube-system","uid":"4044c46e-3c64-4904-8e8e-88dc2dcd9446","resourceVersion":"534","creationTimestamp":"2024-05-28T20:44:41Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"1cc5a4528b10c2379bb2e1b42d1ef2a0","kubernetes.io/config.mirror":"1cc5a4528b10c2379bb2e1b42d1ef2a0","kubernetes.io/config.seen":"2024-05-28T20:44:41.567690863Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-200200","uid":"1b987c37-282d-4e68-a3b1-23ab15a86979","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-28T20:44:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes
.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{"." [truncated 7734 chars]
	I0528 13:47:38.141924    4516 round_trippers.go:463] GET https://172.18.211.78:8441/api/v1/nodes/functional-200200
	I0528 13:47:38.141924    4516 round_trippers.go:469] Request Headers:
	I0528 13:47:38.141962    4516 round_trippers.go:473]     Accept: application/json, */*
	I0528 13:47:38.141962    4516 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 13:47:38.147317    4516 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0528 13:47:38.147317    4516 round_trippers.go:577] Response Headers:
	I0528 13:47:38.147317    4516 round_trippers.go:580]     Content-Type: application/json
	I0528 13:47:38.147317    4516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6b7b9069-82e0-4cfd-9d0b-960d5eab4b37
	I0528 13:47:38.147317    4516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55d3383c-44bc-4b11-ad23-e49bd4d694f6
	I0528 13:47:38.147317    4516 round_trippers.go:580]     Date: Tue, 28 May 2024 20:47:38 GMT
	I0528 13:47:38.147317    4516 round_trippers.go:580]     Audit-Id: 711e9d43-3f2c-443a-b0b8-a7a89e306048
	I0528 13:47:38.147317    4516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 13:47:38.147853    4516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-200200","uid":"1b987c37-282d-4e68-a3b1-23ab15a86979","resourceVersion":"532","creationTimestamp":"2024-05-28T20:44:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-200200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"functional-200200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T13_44_42_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-28T20:44:37Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0528 13:47:38.638157    4516 round_trippers.go:463] GET https://172.18.211.78:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-200200
	I0528 13:47:38.638157    4516 round_trippers.go:469] Request Headers:
	I0528 13:47:38.638157    4516 round_trippers.go:473]     Accept: application/json, */*
	I0528 13:47:38.638157    4516 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 13:47:38.642429    4516 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 13:47:38.642429    4516 round_trippers.go:577] Response Headers:
	I0528 13:47:38.642719    4516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6b7b9069-82e0-4cfd-9d0b-960d5eab4b37
	I0528 13:47:38.642719    4516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55d3383c-44bc-4b11-ad23-e49bd4d694f6
	I0528 13:47:38.642719    4516 round_trippers.go:580]     Date: Tue, 28 May 2024 20:47:38 GMT
	I0528 13:47:38.642719    4516 round_trippers.go:580]     Audit-Id: 7e5cb91c-7887-4148-b1a5-b0461ea6244a
	I0528 13:47:38.642719    4516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 13:47:38.642719    4516 round_trippers.go:580]     Content-Type: application/json
	I0528 13:47:38.643234    4516 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-functional-200200","namespace":"kube-system","uid":"4044c46e-3c64-4904-8e8e-88dc2dcd9446","resourceVersion":"534","creationTimestamp":"2024-05-28T20:44:41Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"1cc5a4528b10c2379bb2e1b42d1ef2a0","kubernetes.io/config.mirror":"1cc5a4528b10c2379bb2e1b42d1ef2a0","kubernetes.io/config.seen":"2024-05-28T20:44:41.567690863Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-200200","uid":"1b987c37-282d-4e68-a3b1-23ab15a86979","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-28T20:44:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes
.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{"." [truncated 7734 chars]
	I0528 13:47:38.644051    4516 round_trippers.go:463] GET https://172.18.211.78:8441/api/v1/nodes/functional-200200
	I0528 13:47:38.644051    4516 round_trippers.go:469] Request Headers:
	I0528 13:47:38.644103    4516 round_trippers.go:473]     Accept: application/json, */*
	I0528 13:47:38.644103    4516 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 13:47:38.648050    4516 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 13:47:38.648050    4516 round_trippers.go:577] Response Headers:
	I0528 13:47:38.648241    4516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55d3383c-44bc-4b11-ad23-e49bd4d694f6
	I0528 13:47:38.648263    4516 round_trippers.go:580]     Date: Tue, 28 May 2024 20:47:38 GMT
	I0528 13:47:38.648263    4516 round_trippers.go:580]     Audit-Id: 738d3c29-ecde-47bc-b0f9-231e26823255
	I0528 13:47:38.648263    4516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 13:47:38.648263    4516 round_trippers.go:580]     Content-Type: application/json
	I0528 13:47:38.648296    4516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6b7b9069-82e0-4cfd-9d0b-960d5eab4b37
	I0528 13:47:38.648455    4516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-200200","uid":"1b987c37-282d-4e68-a3b1-23ab15a86979","resourceVersion":"532","creationTimestamp":"2024-05-28T20:44:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-200200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"functional-200200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T13_44_42_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-28T20:44:37Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0528 13:47:38.648455    4516 pod_ready.go:102] pod "kube-controller-manager-functional-200200" in "kube-system" namespace has status "Ready":"False"
	I0528 13:47:39.124302    4516 round_trippers.go:463] GET https://172.18.211.78:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-200200
	I0528 13:47:39.124302    4516 round_trippers.go:469] Request Headers:
	I0528 13:47:39.124302    4516 round_trippers.go:473]     Accept: application/json, */*
	I0528 13:47:39.124397    4516 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 13:47:39.128337    4516 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 13:47:39.128377    4516 round_trippers.go:577] Response Headers:
	I0528 13:47:39.128377    4516 round_trippers.go:580]     Audit-Id: a93f055b-06ba-4aa7-bb03-3a789b27d0f2
	I0528 13:47:39.128377    4516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 13:47:39.128377    4516 round_trippers.go:580]     Content-Type: application/json
	I0528 13:47:39.128488    4516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6b7b9069-82e0-4cfd-9d0b-960d5eab4b37
	I0528 13:47:39.128526    4516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55d3383c-44bc-4b11-ad23-e49bd4d694f6
	I0528 13:47:39.128526    4516 round_trippers.go:580]     Date: Tue, 28 May 2024 20:47:39 GMT
	I0528 13:47:39.128791    4516 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-functional-200200","namespace":"kube-system","uid":"4044c46e-3c64-4904-8e8e-88dc2dcd9446","resourceVersion":"619","creationTimestamp":"2024-05-28T20:44:41Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"1cc5a4528b10c2379bb2e1b42d1ef2a0","kubernetes.io/config.mirror":"1cc5a4528b10c2379bb2e1b42d1ef2a0","kubernetes.io/config.seen":"2024-05-28T20:44:41.567690863Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-200200","uid":"1b987c37-282d-4e68-a3b1-23ab15a86979","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-28T20:44:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes
.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{"." [truncated 7472 chars]
	I0528 13:47:39.130342    4516 round_trippers.go:463] GET https://172.18.211.78:8441/api/v1/nodes/functional-200200
	I0528 13:47:39.130342    4516 round_trippers.go:469] Request Headers:
	I0528 13:47:39.130342    4516 round_trippers.go:473]     Accept: application/json, */*
	I0528 13:47:39.130342    4516 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 13:47:39.135599    4516 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0528 13:47:39.135599    4516 round_trippers.go:577] Response Headers:
	I0528 13:47:39.135693    4516 round_trippers.go:580]     Audit-Id: d78f9366-ce46-48f7-a27a-62216a85955a
	I0528 13:47:39.135693    4516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 13:47:39.135693    4516 round_trippers.go:580]     Content-Type: application/json
	I0528 13:47:39.135693    4516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6b7b9069-82e0-4cfd-9d0b-960d5eab4b37
	I0528 13:47:39.135693    4516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55d3383c-44bc-4b11-ad23-e49bd4d694f6
	I0528 13:47:39.135693    4516 round_trippers.go:580]     Date: Tue, 28 May 2024 20:47:39 GMT
	I0528 13:47:39.135826    4516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-200200","uid":"1b987c37-282d-4e68-a3b1-23ab15a86979","resourceVersion":"532","creationTimestamp":"2024-05-28T20:44:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-200200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"functional-200200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T13_44_42_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-28T20:44:37Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0528 13:47:39.136743    4516 pod_ready.go:92] pod "kube-controller-manager-functional-200200" in "kube-system" namespace has status "Ready":"True"
	I0528 13:47:39.136807    4516 pod_ready.go:81] duration metric: took 4.513229s for pod "kube-controller-manager-functional-200200" in "kube-system" namespace to be "Ready" ...
	I0528 13:47:39.136807    4516 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-rnt88" in "kube-system" namespace to be "Ready" ...
	I0528 13:47:39.136940    4516 round_trippers.go:463] GET https://172.18.211.78:8441/api/v1/namespaces/kube-system/pods/kube-proxy-rnt88
	I0528 13:47:39.136940    4516 round_trippers.go:469] Request Headers:
	I0528 13:47:39.136940    4516 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 13:47:39.136940    4516 round_trippers.go:473]     Accept: application/json, */*
	I0528 13:47:39.139910    4516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 13:47:39.139997    4516 round_trippers.go:577] Response Headers:
	I0528 13:47:39.140019    4516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55d3383c-44bc-4b11-ad23-e49bd4d694f6
	I0528 13:47:39.140019    4516 round_trippers.go:580]     Date: Tue, 28 May 2024 20:47:39 GMT
	I0528 13:47:39.140019    4516 round_trippers.go:580]     Audit-Id: dcf45e84-50aa-468a-b177-3fc3853c9ec5
	I0528 13:47:39.140019    4516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 13:47:39.140019    4516 round_trippers.go:580]     Content-Type: application/json
	I0528 13:47:39.140019    4516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6b7b9069-82e0-4cfd-9d0b-960d5eab4b37
	I0528 13:47:39.140019    4516 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-rnt88","generateName":"kube-proxy-","namespace":"kube-system","uid":"a5657586-8c98-4a0a-9ef7-cac5630f1de6","resourceVersion":"546","creationTimestamp":"2024-05-28T20:44:55Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"1e885376-eb95-4ec9-badd-dc28413993a1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-28T20:44:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1e885376-eb95-4ec9-badd-dc28413993a1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6035 chars]
	I0528 13:47:39.140943    4516 round_trippers.go:463] GET https://172.18.211.78:8441/api/v1/nodes/functional-200200
	I0528 13:47:39.141008    4516 round_trippers.go:469] Request Headers:
	I0528 13:47:39.141008    4516 round_trippers.go:473]     Accept: application/json, */*
	I0528 13:47:39.141008    4516 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 13:47:39.144203    4516 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 13:47:39.144203    4516 round_trippers.go:577] Response Headers:
	I0528 13:47:39.144203    4516 round_trippers.go:580]     Audit-Id: 4caeb016-7421-4e82-9016-52601a02663b
	I0528 13:47:39.144203    4516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 13:47:39.144203    4516 round_trippers.go:580]     Content-Type: application/json
	I0528 13:47:39.144203    4516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6b7b9069-82e0-4cfd-9d0b-960d5eab4b37
	I0528 13:47:39.144203    4516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55d3383c-44bc-4b11-ad23-e49bd4d694f6
	I0528 13:47:39.144203    4516 round_trippers.go:580]     Date: Tue, 28 May 2024 20:47:39 GMT
	I0528 13:47:39.145210    4516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-200200","uid":"1b987c37-282d-4e68-a3b1-23ab15a86979","resourceVersion":"532","creationTimestamp":"2024-05-28T20:44:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-200200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"functional-200200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T13_44_42_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-28T20:44:37Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0528 13:47:39.145469    4516 pod_ready.go:92] pod "kube-proxy-rnt88" in "kube-system" namespace has status "Ready":"True"
	I0528 13:47:39.145469    4516 pod_ready.go:81] duration metric: took 8.6615ms for pod "kube-proxy-rnt88" in "kube-system" namespace to be "Ready" ...
	I0528 13:47:39.145469    4516 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-functional-200200" in "kube-system" namespace to be "Ready" ...
	I0528 13:47:39.145469    4516 round_trippers.go:463] GET https://172.18.211.78:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-200200
	I0528 13:47:39.145469    4516 round_trippers.go:469] Request Headers:
	I0528 13:47:39.145469    4516 round_trippers.go:473]     Accept: application/json, */*
	I0528 13:47:39.145469    4516 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 13:47:39.148074    4516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 13:47:39.148074    4516 round_trippers.go:577] Response Headers:
	I0528 13:47:39.148074    4516 round_trippers.go:580]     Audit-Id: 8c0aa4b7-23ea-4a88-b919-a8245584be1e
	I0528 13:47:39.148074    4516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 13:47:39.148074    4516 round_trippers.go:580]     Content-Type: application/json
	I0528 13:47:39.148074    4516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6b7b9069-82e0-4cfd-9d0b-960d5eab4b37
	I0528 13:47:39.148074    4516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55d3383c-44bc-4b11-ad23-e49bd4d694f6
	I0528 13:47:39.148074    4516 round_trippers.go:580]     Date: Tue, 28 May 2024 20:47:39 GMT
	I0528 13:47:39.149399    4516 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-200200","namespace":"kube-system","uid":"5d092a1b-e22b-4786-9aea-7b0993dfde51","resourceVersion":"618","creationTimestamp":"2024-05-28T20:44:39Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"9accbf9613e7defc6f31b48983967075","kubernetes.io/config.mirror":"9accbf9613e7defc6f31b48983967075","kubernetes.io/config.seen":"2024-05-28T20:44:34.013698799Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-200200","uid":"1b987c37-282d-4e68-a3b1-23ab15a86979","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-28T20:44:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5454 chars]
	I0528 13:47:39.149399    4516 round_trippers.go:463] GET https://172.18.211.78:8441/api/v1/nodes/functional-200200
	I0528 13:47:39.149399    4516 round_trippers.go:469] Request Headers:
	I0528 13:47:39.149399    4516 round_trippers.go:473]     Accept: application/json, */*
	I0528 13:47:39.149399    4516 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 13:47:39.152280    4516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 13:47:39.152495    4516 round_trippers.go:577] Response Headers:
	I0528 13:47:39.152495    4516 round_trippers.go:580]     Audit-Id: 4dd43d4f-cd77-46fc-9152-c73490bcbe94
	I0528 13:47:39.152495    4516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 13:47:39.152495    4516 round_trippers.go:580]     Content-Type: application/json
	I0528 13:47:39.152495    4516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6b7b9069-82e0-4cfd-9d0b-960d5eab4b37
	I0528 13:47:39.152495    4516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55d3383c-44bc-4b11-ad23-e49bd4d694f6
	I0528 13:47:39.152495    4516 round_trippers.go:580]     Date: Tue, 28 May 2024 20:47:39 GMT
	I0528 13:47:39.152704    4516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-200200","uid":"1b987c37-282d-4e68-a3b1-23ab15a86979","resourceVersion":"532","creationTimestamp":"2024-05-28T20:44:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-200200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"functional-200200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T13_44_42_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-28T20:44:37Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0528 13:47:39.153197    4516 pod_ready.go:92] pod "kube-scheduler-functional-200200" in "kube-system" namespace has status "Ready":"True"
	I0528 13:47:39.153284    4516 pod_ready.go:81] duration metric: took 7.8154ms for pod "kube-scheduler-functional-200200" in "kube-system" namespace to be "Ready" ...
	I0528 13:47:39.153284    4516 pod_ready.go:38] duration metric: took 14.097279s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0528 13:47:39.153284    4516 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0528 13:47:39.172584    4516 command_runner.go:130] > -16
	I0528 13:47:39.172584    4516 ops.go:34] apiserver oom_adj: -16
	I0528 13:47:39.172584    4516 kubeadm.go:591] duration metric: took 24.1800283s to restartPrimaryControlPlane
	I0528 13:47:39.172584    4516 kubeadm.go:393] duration metric: took 24.2822069s to StartCluster
	I0528 13:47:39.172584    4516 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 13:47:39.172584    4516 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0528 13:47:39.173920    4516 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 13:47:39.175791    4516 start.go:234] Will wait 6m0s for node &{Name: IP:172.18.211.78 Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0528 13:47:39.175791    4516 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0528 13:47:39.179864    4516 out.go:177] * Verifying Kubernetes components...
	I0528 13:47:39.175791    4516 addons.go:69] Setting storage-provisioner=true in profile "functional-200200"
	I0528 13:47:39.175791    4516 addons.go:69] Setting default-storageclass=true in profile "functional-200200"
	I0528 13:47:39.175791    4516 config.go:182] Loaded profile config "functional-200200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0528 13:47:39.180044    4516 addons.go:234] Setting addon storage-provisioner=true in "functional-200200"
	W0528 13:47:39.182585    4516 addons.go:243] addon storage-provisioner should already be in state true
	I0528 13:47:39.180044    4516 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-200200"
	I0528 13:47:39.182773    4516 host.go:66] Checking if "functional-200200" exists ...
	I0528 13:47:39.183675    4516 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-200200 ).state
	I0528 13:47:39.183675    4516 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-200200 ).state
	I0528 13:47:39.196489    4516 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 13:47:39.506481    4516 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0528 13:47:39.531495    4516 node_ready.go:35] waiting up to 6m0s for node "functional-200200" to be "Ready" ...
	I0528 13:47:39.532479    4516 round_trippers.go:463] GET https://172.18.211.78:8441/api/v1/nodes/functional-200200
	I0528 13:47:39.532479    4516 round_trippers.go:469] Request Headers:
	I0528 13:47:39.532479    4516 round_trippers.go:473]     Accept: application/json, */*
	I0528 13:47:39.532479    4516 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 13:47:39.537226    4516 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 13:47:39.537226    4516 round_trippers.go:577] Response Headers:
	I0528 13:47:39.537226    4516 round_trippers.go:580]     Content-Type: application/json
	I0528 13:47:39.537226    4516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6b7b9069-82e0-4cfd-9d0b-960d5eab4b37
	I0528 13:47:39.537226    4516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55d3383c-44bc-4b11-ad23-e49bd4d694f6
	I0528 13:47:39.537226    4516 round_trippers.go:580]     Date: Tue, 28 May 2024 20:47:39 GMT
	I0528 13:47:39.537226    4516 round_trippers.go:580]     Audit-Id: 99ed05ae-a92d-4097-a746-7616c2dfd67b
	I0528 13:47:39.537226    4516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 13:47:39.537226    4516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-200200","uid":"1b987c37-282d-4e68-a3b1-23ab15a86979","resourceVersion":"532","creationTimestamp":"2024-05-28T20:44:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-200200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"functional-200200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T13_44_42_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-28T20:44:37Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0528 13:47:39.537850    4516 node_ready.go:49] node "functional-200200" has status "Ready":"True"
	I0528 13:47:39.537850    4516 node_ready.go:38] duration metric: took 6.3548ms for node "functional-200200" to be "Ready" ...
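Editor's note: the node-readiness gate above is a single GET of the Node object followed by a scan of its conditions. Below is a minimal client-go sketch of the same check, assuming the kubeconfig path and node name from this run; it is an illustration only, not minikube's actual node_ready.go implementation.

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Load the kubeconfig minikube just rewrote for this profile (path taken from the log).
        cfg, err := clientcmd.BuildConfigFromFlags("", `C:\Users\jenkins.minikube1\minikube-integration\kubeconfig`)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        // Same request as the round_trippers trace above: GET /api/v1/nodes/functional-200200.
        node, err := cs.CoreV1().Nodes().Get(context.Background(), "functional-200200", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                fmt.Printf("node %s Ready=%s\n", node.Name, c.Status) // "True" in this run
            }
        }
    }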
	I0528 13:47:39.537850    4516 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0528 13:47:39.537850    4516 round_trippers.go:463] GET https://172.18.211.78:8441/api/v1/namespaces/kube-system/pods
	I0528 13:47:39.537850    4516 round_trippers.go:469] Request Headers:
	I0528 13:47:39.537850    4516 round_trippers.go:473]     Accept: application/json, */*
	I0528 13:47:39.537850    4516 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 13:47:39.542426    4516 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 13:47:39.542426    4516 round_trippers.go:577] Response Headers:
	I0528 13:47:39.542775    4516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6b7b9069-82e0-4cfd-9d0b-960d5eab4b37
	I0528 13:47:39.542775    4516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55d3383c-44bc-4b11-ad23-e49bd4d694f6
	I0528 13:47:39.542775    4516 round_trippers.go:580]     Date: Tue, 28 May 2024 20:47:39 GMT
	I0528 13:47:39.542775    4516 round_trippers.go:580]     Audit-Id: c0ea5485-eba3-4b07-8ada-6188ea56b25c
	I0528 13:47:39.542775    4516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 13:47:39.542775    4516 round_trippers.go:580]     Content-Type: application/json
	I0528 13:47:39.544846    4516 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"619"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-v57rj","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a1552367-c51a-4f98-ab64-a7f090b4f28b","resourceVersion":"605","creationTimestamp":"2024-05-28T20:44:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"85bafc8c-e4f5-47d9-8336-e1de98adfcba","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-28T20:44:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"85bafc8c-e4f5-47d9-8336-e1de98adfcba\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 51104 chars]
	I0528 13:47:39.547095    4516 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-v57rj" in "kube-system" namespace to be "Ready" ...
	I0528 13:47:39.547095    4516 round_trippers.go:463] GET https://172.18.211.78:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-v57rj
	I0528 13:47:39.547095    4516 round_trippers.go:469] Request Headers:
	I0528 13:47:39.547095    4516 round_trippers.go:473]     Accept: application/json, */*
	I0528 13:47:39.547095    4516 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 13:47:39.549660    4516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 13:47:39.549660    4516 round_trippers.go:577] Response Headers:
	I0528 13:47:39.549660    4516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6b7b9069-82e0-4cfd-9d0b-960d5eab4b37
	I0528 13:47:39.549660    4516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55d3383c-44bc-4b11-ad23-e49bd4d694f6
	I0528 13:47:39.549660    4516 round_trippers.go:580]     Date: Tue, 28 May 2024 20:47:39 GMT
	I0528 13:47:39.549660    4516 round_trippers.go:580]     Audit-Id: bbc78550-fb80-4e86-82a4-2675847e2212
	I0528 13:47:39.549660    4516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 13:47:39.549660    4516 round_trippers.go:580]     Content-Type: application/json
	I0528 13:47:39.549660    4516 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-v57rj","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a1552367-c51a-4f98-ab64-a7f090b4f28b","resourceVersion":"605","creationTimestamp":"2024-05-28T20:44:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"85bafc8c-e4f5-47d9-8336-e1de98adfcba","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-28T20:44:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"85bafc8c-e4f5-47d9-8336-e1de98adfcba\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6450 chars]
	I0528 13:47:39.550653    4516 round_trippers.go:463] GET https://172.18.211.78:8441/api/v1/nodes/functional-200200
	I0528 13:47:39.550653    4516 round_trippers.go:469] Request Headers:
	I0528 13:47:39.550653    4516 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 13:47:39.550653    4516 round_trippers.go:473]     Accept: application/json, */*
	I0528 13:47:39.554690    4516 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 13:47:39.554690    4516 round_trippers.go:577] Response Headers:
	I0528 13:47:39.554690    4516 round_trippers.go:580]     Audit-Id: 5ff3bcdc-9091-40d6-be11-7029aa78d78d
	I0528 13:47:39.555299    4516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 13:47:39.555299    4516 round_trippers.go:580]     Content-Type: application/json
	I0528 13:47:39.555299    4516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6b7b9069-82e0-4cfd-9d0b-960d5eab4b37
	I0528 13:47:39.555350    4516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55d3383c-44bc-4b11-ad23-e49bd4d694f6
	I0528 13:47:39.555350    4516 round_trippers.go:580]     Date: Tue, 28 May 2024 20:47:39 GMT
	I0528 13:47:39.555808    4516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-200200","uid":"1b987c37-282d-4e68-a3b1-23ab15a86979","resourceVersion":"532","creationTimestamp":"2024-05-28T20:44:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-200200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"functional-200200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T13_44_42_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-28T20:44:37Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0528 13:47:39.556361    4516 pod_ready.go:92] pod "coredns-7db6d8ff4d-v57rj" in "kube-system" namespace has status "Ready":"True"
	I0528 13:47:39.556439    4516 pod_ready.go:81] duration metric: took 9.3437ms for pod "coredns-7db6d8ff4d-v57rj" in "kube-system" namespace to be "Ready" ...
	I0528 13:47:39.556527    4516 pod_ready.go:78] waiting up to 6m0s for pod "etcd-functional-200200" in "kube-system" namespace to be "Ready" ...
	I0528 13:47:39.556599    4516 round_trippers.go:463] GET https://172.18.211.78:8441/api/v1/namespaces/kube-system/pods/etcd-functional-200200
	I0528 13:47:39.556672    4516 round_trippers.go:469] Request Headers:
	I0528 13:47:39.556672    4516 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 13:47:39.556672    4516 round_trippers.go:473]     Accept: application/json, */*
	I0528 13:47:39.561409    4516 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 13:47:39.561409    4516 round_trippers.go:577] Response Headers:
	I0528 13:47:39.561409    4516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 13:47:39.561409    4516 round_trippers.go:580]     Content-Type: application/json
	I0528 13:47:39.561409    4516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6b7b9069-82e0-4cfd-9d0b-960d5eab4b37
	I0528 13:47:39.561409    4516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55d3383c-44bc-4b11-ad23-e49bd4d694f6
	I0528 13:47:39.561409    4516 round_trippers.go:580]     Date: Tue, 28 May 2024 20:47:39 GMT
	I0528 13:47:39.561409    4516 round_trippers.go:580]     Audit-Id: e5baa793-84f6-4f1f-8181-773f4f984c6f
	I0528 13:47:39.561409    4516 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-200200","namespace":"kube-system","uid":"a5cc1ae2-d338-4a08-9a90-3ee5ff5e04b7","resourceVersion":"612","creationTimestamp":"2024-05-28T20:44:41Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.18.211.78:2379","kubernetes.io/config.hash":"656d082e82aa61fe8809f24da8f979c4","kubernetes.io/config.mirror":"656d082e82aa61fe8809f24da8f979c4","kubernetes.io/config.seen":"2024-05-28T20:44:41.567684566Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-200200","uid":"1b987c37-282d-4e68-a3b1-23ab15a86979","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-28T20:44:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6595 chars]
	I0528 13:47:39.561409    4516 round_trippers.go:463] GET https://172.18.211.78:8441/api/v1/nodes/functional-200200
	I0528 13:47:39.561409    4516 round_trippers.go:469] Request Headers:
	I0528 13:47:39.561409    4516 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 13:47:39.561409    4516 round_trippers.go:473]     Accept: application/json, */*
	I0528 13:47:39.564403    4516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 13:47:39.564403    4516 round_trippers.go:577] Response Headers:
	I0528 13:47:39.564403    4516 round_trippers.go:580]     Content-Type: application/json
	I0528 13:47:39.564403    4516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6b7b9069-82e0-4cfd-9d0b-960d5eab4b37
	I0528 13:47:39.564403    4516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55d3383c-44bc-4b11-ad23-e49bd4d694f6
	I0528 13:47:39.564403    4516 round_trippers.go:580]     Date: Tue, 28 May 2024 20:47:39 GMT
	I0528 13:47:39.564403    4516 round_trippers.go:580]     Audit-Id: f49bb08d-2257-46a2-9793-4e4de893769b
	I0528 13:47:39.564403    4516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 13:47:39.565396    4516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-200200","uid":"1b987c37-282d-4e68-a3b1-23ab15a86979","resourceVersion":"532","creationTimestamp":"2024-05-28T20:44:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-200200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"functional-200200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T13_44_42_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-28T20:44:37Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0528 13:47:39.565396    4516 pod_ready.go:92] pod "etcd-functional-200200" in "kube-system" namespace has status "Ready":"True"
	I0528 13:47:39.565396    4516 pod_ready.go:81] duration metric: took 8.8681ms for pod "etcd-functional-200200" in "kube-system" namespace to be "Ready" ...
	I0528 13:47:39.565396    4516 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-functional-200200" in "kube-system" namespace to be "Ready" ...
	I0528 13:47:39.729353    4516 request.go:629] Waited for 163.6704ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.211.78:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-200200
	I0528 13:47:39.729487    4516 round_trippers.go:463] GET https://172.18.211.78:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-200200
	I0528 13:47:39.729554    4516 round_trippers.go:469] Request Headers:
	I0528 13:47:39.729554    4516 round_trippers.go:473]     Accept: application/json, */*
	I0528 13:47:39.729554    4516 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 13:47:39.735341    4516 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0528 13:47:39.735395    4516 round_trippers.go:577] Response Headers:
	I0528 13:47:39.735395    4516 round_trippers.go:580]     Date: Tue, 28 May 2024 20:47:39 GMT
	I0528 13:47:39.735395    4516 round_trippers.go:580]     Audit-Id: 11777752-8a24-4c33-8512-cd7908540e68
	I0528 13:47:39.735486    4516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 13:47:39.735486    4516 round_trippers.go:580]     Content-Type: application/json
	I0528 13:47:39.735486    4516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6b7b9069-82e0-4cfd-9d0b-960d5eab4b37
	I0528 13:47:39.735486    4516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55d3383c-44bc-4b11-ad23-e49bd4d694f6
	I0528 13:47:39.735867    4516 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-200200","namespace":"kube-system","uid":"1562eaca-402c-4134-9e2c-30da4d4a592e","resourceVersion":"607","creationTimestamp":"2024-05-28T20:44:41Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.18.211.78:8441","kubernetes.io/config.hash":"ecf535b188f0bbbe068dac0d93a34e49","kubernetes.io/config.mirror":"ecf535b188f0bbbe068dac0d93a34e49","kubernetes.io/config.seen":"2024-05-28T20:44:34.013696900Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-200200","uid":"1b987c37-282d-4e68-a3b1-23ab15a86979","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-28T20:44:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 8042 chars]
	I0528 13:47:39.934613    4516 request.go:629] Waited for 197.7352ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.211.78:8441/api/v1/nodes/functional-200200
	I0528 13:47:39.934717    4516 round_trippers.go:463] GET https://172.18.211.78:8441/api/v1/nodes/functional-200200
	I0528 13:47:39.934864    4516 round_trippers.go:469] Request Headers:
	I0528 13:47:39.934864    4516 round_trippers.go:473]     Accept: application/json, */*
	I0528 13:47:39.934948    4516 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 13:47:39.938743    4516 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 13:47:39.939552    4516 round_trippers.go:577] Response Headers:
	I0528 13:47:39.939552    4516 round_trippers.go:580]     Audit-Id: 0b5d6590-1664-4810-9a8b-b7cf8b84484c
	I0528 13:47:39.939552    4516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 13:47:39.939552    4516 round_trippers.go:580]     Content-Type: application/json
	I0528 13:47:39.939552    4516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6b7b9069-82e0-4cfd-9d0b-960d5eab4b37
	I0528 13:47:39.939552    4516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55d3383c-44bc-4b11-ad23-e49bd4d694f6
	I0528 13:47:39.939552    4516 round_trippers.go:580]     Date: Tue, 28 May 2024 20:47:39 GMT
	I0528 13:47:39.939889    4516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-200200","uid":"1b987c37-282d-4e68-a3b1-23ab15a86979","resourceVersion":"532","creationTimestamp":"2024-05-28T20:44:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-200200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"functional-200200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T13_44_42_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-28T20:44:37Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0528 13:47:39.939973    4516 pod_ready.go:92] pod "kube-apiserver-functional-200200" in "kube-system" namespace has status "Ready":"True"
	I0528 13:47:39.939973    4516 pod_ready.go:81] duration metric: took 374.5765ms for pod "kube-apiserver-functional-200200" in "kube-system" namespace to be "Ready" ...
	I0528 13:47:39.939973    4516 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-functional-200200" in "kube-system" namespace to be "Ready" ...
	I0528 13:47:40.125842    4516 request.go:629] Waited for 185.8688ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.211.78:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-200200
	I0528 13:47:40.126175    4516 round_trippers.go:463] GET https://172.18.211.78:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-200200
	I0528 13:47:40.126238    4516 round_trippers.go:469] Request Headers:
	I0528 13:47:40.126238    4516 round_trippers.go:473]     Accept: application/json, */*
	I0528 13:47:40.126238    4516 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 13:47:40.135139    4516 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0528 13:47:40.135139    4516 round_trippers.go:577] Response Headers:
	I0528 13:47:40.135139    4516 round_trippers.go:580]     Audit-Id: 381e5765-b4b6-4d0b-b5ad-6706328e445b
	I0528 13:47:40.135139    4516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 13:47:40.135139    4516 round_trippers.go:580]     Content-Type: application/json
	I0528 13:47:40.135139    4516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6b7b9069-82e0-4cfd-9d0b-960d5eab4b37
	I0528 13:47:40.135139    4516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55d3383c-44bc-4b11-ad23-e49bd4d694f6
	I0528 13:47:40.135139    4516 round_trippers.go:580]     Date: Tue, 28 May 2024 20:47:40 GMT
	I0528 13:47:40.135139    4516 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-functional-200200","namespace":"kube-system","uid":"4044c46e-3c64-4904-8e8e-88dc2dcd9446","resourceVersion":"619","creationTimestamp":"2024-05-28T20:44:41Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"1cc5a4528b10c2379bb2e1b42d1ef2a0","kubernetes.io/config.mirror":"1cc5a4528b10c2379bb2e1b42d1ef2a0","kubernetes.io/config.seen":"2024-05-28T20:44:41.567690863Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-200200","uid":"1b987c37-282d-4e68-a3b1-23ab15a86979","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-28T20:44:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes
.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{"." [truncated 7472 chars]
	I0528 13:47:40.333570    4516 request.go:629] Waited for 197.3143ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.211.78:8441/api/v1/nodes/functional-200200
	I0528 13:47:40.333570    4516 round_trippers.go:463] GET https://172.18.211.78:8441/api/v1/nodes/functional-200200
	I0528 13:47:40.333570    4516 round_trippers.go:469] Request Headers:
	I0528 13:47:40.333903    4516 round_trippers.go:473]     Accept: application/json, */*
	I0528 13:47:40.333903    4516 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 13:47:40.337274    4516 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 13:47:40.337700    4516 round_trippers.go:577] Response Headers:
	I0528 13:47:40.337700    4516 round_trippers.go:580]     Date: Tue, 28 May 2024 20:47:40 GMT
	I0528 13:47:40.337700    4516 round_trippers.go:580]     Audit-Id: 8aed6fe8-cf47-4b4a-92dd-e4aa789c18dd
	I0528 13:47:40.337700    4516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 13:47:40.337700    4516 round_trippers.go:580]     Content-Type: application/json
	I0528 13:47:40.337700    4516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6b7b9069-82e0-4cfd-9d0b-960d5eab4b37
	I0528 13:47:40.337700    4516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55d3383c-44bc-4b11-ad23-e49bd4d694f6
	I0528 13:47:40.337876    4516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-200200","uid":"1b987c37-282d-4e68-a3b1-23ab15a86979","resourceVersion":"532","creationTimestamp":"2024-05-28T20:44:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-200200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"functional-200200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T13_44_42_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-28T20:44:37Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0528 13:47:40.338459    4516 pod_ready.go:92] pod "kube-controller-manager-functional-200200" in "kube-system" namespace has status "Ready":"True"
	I0528 13:47:40.338459    4516 pod_ready.go:81] duration metric: took 398.4853ms for pod "kube-controller-manager-functional-200200" in "kube-system" namespace to be "Ready" ...
	I0528 13:47:40.338459    4516 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rnt88" in "kube-system" namespace to be "Ready" ...
	I0528 13:47:40.524372    4516 request.go:629] Waited for 185.8033ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.211.78:8441/api/v1/namespaces/kube-system/pods/kube-proxy-rnt88
	I0528 13:47:40.524926    4516 round_trippers.go:463] GET https://172.18.211.78:8441/api/v1/namespaces/kube-system/pods/kube-proxy-rnt88
	I0528 13:47:40.524926    4516 round_trippers.go:469] Request Headers:
	I0528 13:47:40.524926    4516 round_trippers.go:473]     Accept: application/json, */*
	I0528 13:47:40.524926    4516 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 13:47:40.528570    4516 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 13:47:40.529487    4516 round_trippers.go:577] Response Headers:
	I0528 13:47:40.529487    4516 round_trippers.go:580]     Audit-Id: ba6ec347-9626-4f6c-94ce-ea98db9381ea
	I0528 13:47:40.529487    4516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 13:47:40.529487    4516 round_trippers.go:580]     Content-Type: application/json
	I0528 13:47:40.529487    4516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6b7b9069-82e0-4cfd-9d0b-960d5eab4b37
	I0528 13:47:40.529487    4516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55d3383c-44bc-4b11-ad23-e49bd4d694f6
	I0528 13:47:40.529487    4516 round_trippers.go:580]     Date: Tue, 28 May 2024 20:47:40 GMT
	I0528 13:47:40.529487    4516 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-rnt88","generateName":"kube-proxy-","namespace":"kube-system","uid":"a5657586-8c98-4a0a-9ef7-cac5630f1de6","resourceVersion":"546","creationTimestamp":"2024-05-28T20:44:55Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"1e885376-eb95-4ec9-badd-dc28413993a1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-28T20:44:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1e885376-eb95-4ec9-badd-dc28413993a1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6035 chars]
	I0528 13:47:40.729968    4516 request.go:629] Waited for 199.3707ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.211.78:8441/api/v1/nodes/functional-200200
	I0528 13:47:40.730095    4516 round_trippers.go:463] GET https://172.18.211.78:8441/api/v1/nodes/functional-200200
	I0528 13:47:40.730095    4516 round_trippers.go:469] Request Headers:
	I0528 13:47:40.730095    4516 round_trippers.go:473]     Accept: application/json, */*
	I0528 13:47:40.730095    4516 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 13:47:40.735931    4516 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0528 13:47:40.735931    4516 round_trippers.go:577] Response Headers:
	I0528 13:47:40.735931    4516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55d3383c-44bc-4b11-ad23-e49bd4d694f6
	I0528 13:47:40.735931    4516 round_trippers.go:580]     Date: Tue, 28 May 2024 20:47:40 GMT
	I0528 13:47:40.735931    4516 round_trippers.go:580]     Audit-Id: 75007165-a0b3-4938-a58c-d2c018127349
	I0528 13:47:40.735931    4516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 13:47:40.735931    4516 round_trippers.go:580]     Content-Type: application/json
	I0528 13:47:40.735931    4516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6b7b9069-82e0-4cfd-9d0b-960d5eab4b37
	I0528 13:47:40.736491    4516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-200200","uid":"1b987c37-282d-4e68-a3b1-23ab15a86979","resourceVersion":"532","creationTimestamp":"2024-05-28T20:44:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-200200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"functional-200200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T13_44_42_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-28T20:44:37Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0528 13:47:40.736751    4516 pod_ready.go:92] pod "kube-proxy-rnt88" in "kube-system" namespace has status "Ready":"True"
	I0528 13:47:40.736751    4516 pod_ready.go:81] duration metric: took 398.2919ms for pod "kube-proxy-rnt88" in "kube-system" namespace to be "Ready" ...
	I0528 13:47:40.736751    4516 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-functional-200200" in "kube-system" namespace to be "Ready" ...
	I0528 13:47:40.936832    4516 request.go:629] Waited for 200.0806ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.211.78:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-200200
	I0528 13:47:40.936832    4516 round_trippers.go:463] GET https://172.18.211.78:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-200200
	I0528 13:47:40.936832    4516 round_trippers.go:469] Request Headers:
	I0528 13:47:40.936832    4516 round_trippers.go:473]     Accept: application/json, */*
	I0528 13:47:40.936832    4516 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 13:47:40.943912    4516 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0528 13:47:40.944699    4516 round_trippers.go:577] Response Headers:
	I0528 13:47:40.944699    4516 round_trippers.go:580]     Content-Type: application/json
	I0528 13:47:40.944699    4516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6b7b9069-82e0-4cfd-9d0b-960d5eab4b37
	I0528 13:47:40.944699    4516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55d3383c-44bc-4b11-ad23-e49bd4d694f6
	I0528 13:47:40.944699    4516 round_trippers.go:580]     Date: Tue, 28 May 2024 20:47:40 GMT
	I0528 13:47:40.944699    4516 round_trippers.go:580]     Audit-Id: ef40d688-2bc5-4a60-9b87-58137323ffda
	I0528 13:47:40.944699    4516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 13:47:40.944904    4516 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-200200","namespace":"kube-system","uid":"5d092a1b-e22b-4786-9aea-7b0993dfde51","resourceVersion":"618","creationTimestamp":"2024-05-28T20:44:39Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"9accbf9613e7defc6f31b48983967075","kubernetes.io/config.mirror":"9accbf9613e7defc6f31b48983967075","kubernetes.io/config.seen":"2024-05-28T20:44:34.013698799Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-200200","uid":"1b987c37-282d-4e68-a3b1-23ab15a86979","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-28T20:44:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5454 chars]
	I0528 13:47:41.127774    4516 request.go:629] Waited for 182.3078ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.211.78:8441/api/v1/nodes/functional-200200
	I0528 13:47:41.127774    4516 round_trippers.go:463] GET https://172.18.211.78:8441/api/v1/nodes/functional-200200
	I0528 13:47:41.127774    4516 round_trippers.go:469] Request Headers:
	I0528 13:47:41.127774    4516 round_trippers.go:473]     Accept: application/json, */*
	I0528 13:47:41.127774    4516 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 13:47:41.134777    4516 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0528 13:47:41.135237    4516 round_trippers.go:577] Response Headers:
	I0528 13:47:41.135237    4516 round_trippers.go:580]     Content-Type: application/json
	I0528 13:47:41.135237    4516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6b7b9069-82e0-4cfd-9d0b-960d5eab4b37
	I0528 13:47:41.135237    4516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55d3383c-44bc-4b11-ad23-e49bd4d694f6
	I0528 13:47:41.135237    4516 round_trippers.go:580]     Date: Tue, 28 May 2024 20:47:41 GMT
	I0528 13:47:41.135237    4516 round_trippers.go:580]     Audit-Id: ff49e274-cd5a-437b-ba57-72127ed80a0b
	I0528 13:47:41.135237    4516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 13:47:41.135506    4516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-200200","uid":"1b987c37-282d-4e68-a3b1-23ab15a86979","resourceVersion":"532","creationTimestamp":"2024-05-28T20:44:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-200200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"functional-200200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T13_44_42_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-28T20:44:37Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0528 13:47:41.135957    4516 pod_ready.go:92] pod "kube-scheduler-functional-200200" in "kube-system" namespace has status "Ready":"True"
	I0528 13:47:41.136023    4516 pod_ready.go:81] duration metric: took 399.2713ms for pod "kube-scheduler-functional-200200" in "kube-system" namespace to be "Ready" ...
	I0528 13:47:41.136023    4516 pod_ready.go:38] duration metric: took 1.5981707s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
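Editor's note: the per-pod waits above (coredns, etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler) all follow the same pattern: fetch pods by label, then poll the PodReady condition, with client-go's built-in rate limiting producing the "Waited for ... due to client-side throttling" lines. A hedged sketch of one such wait is shown below; the 500ms poll interval is an assumption, while the namespace and label selectors are the ones from the log.

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitPodsReady polls until every kube-system pod matching selector reports Ready.
    // Sketch only; minikube's pod_ready.go adds extra bookkeeping and logging.
    func waitPodsReady(ctx context.Context, cs *kubernetes.Clientset, selector string) error {
        return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: selector})
                if err != nil {
                    return false, err
                }
                for _, p := range pods.Items {
                    ready := false
                    for _, c := range p.Status.Conditions {
                        if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                            ready = true
                        }
                    }
                    if !ready {
                        return false, nil
                    }
                }
                return true, nil
            })
    }

    // Usage, with the selectors this run waits on:
    //   for _, sel := range []string{"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
    //       "component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler"} {
    //       if err := waitPodsReady(ctx, cs, sel); err != nil { /* handle */ }
    //   }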
	I0528 13:47:41.136119    4516 api_server.go:52] waiting for apiserver process to appear ...
	I0528 13:47:41.149031    4516 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 13:47:41.180967    4516 command_runner.go:130] > 5633
	I0528 13:47:41.181668    4516 api_server.go:72] duration metric: took 2.0058738s to wait for apiserver process to appear ...
	I0528 13:47:41.181668    4516 api_server.go:88] waiting for apiserver healthz status ...
	I0528 13:47:41.181788    4516 api_server.go:253] Checking apiserver healthz at https://172.18.211.78:8441/healthz ...
	I0528 13:47:41.196330    4516 api_server.go:279] https://172.18.211.78:8441/healthz returned 200:
	ok
	I0528 13:47:41.196330    4516 round_trippers.go:463] GET https://172.18.211.78:8441/version
	I0528 13:47:41.196330    4516 round_trippers.go:469] Request Headers:
	I0528 13:47:41.196330    4516 round_trippers.go:473]     Accept: application/json, */*
	I0528 13:47:41.196330    4516 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 13:47:41.201606    4516 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 13:47:41.201606    4516 round_trippers.go:577] Response Headers:
	I0528 13:47:41.201606    4516 round_trippers.go:580]     Content-Length: 263
	I0528 13:47:41.201606    4516 round_trippers.go:580]     Date: Tue, 28 May 2024 20:47:41 GMT
	I0528 13:47:41.201606    4516 round_trippers.go:580]     Audit-Id: 31664394-fc76-4573-af00-1519959de8f2
	I0528 13:47:41.201606    4516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 13:47:41.201606    4516 round_trippers.go:580]     Content-Type: application/json
	I0528 13:47:41.201606    4516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6b7b9069-82e0-4cfd-9d0b-960d5eab4b37
	I0528 13:47:41.201606    4516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55d3383c-44bc-4b11-ad23-e49bd4d694f6
	I0528 13:47:41.201606    4516 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.1",
	  "gitCommit": "6911225c3f747e1cd9d109c305436d08b668f086",
	  "gitTreeState": "clean",
	  "buildDate": "2024-05-14T10:42:02Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0528 13:47:41.201606    4516 api_server.go:141] control plane version: v1.30.1
	I0528 13:47:41.201606    4516 api_server.go:131] duration metric: took 19.938ms to wait for apiserver health ...
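Editor's note: the apiserver health gate above is two plain GETs: /healthz until the body comes back as "ok", then /version, which is where the v1.30.1 block comes from. A minimal sketch using the discovery client follows (it assumes the clientset built in the earlier sketch).

    import (
        "context"
        "fmt"

        "k8s.io/client-go/kubernetes"
    )

    // checkAPIServer mirrors the two probes above: GET /healthz, then GET /version.
    func checkAPIServer(ctx context.Context, cs *kubernetes.Clientset) error {
        // /healthz returns the literal body "ok" once the apiserver is serving.
        body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(ctx)
        if err != nil {
            return fmt.Errorf("healthz: %w", err)
        }
        fmt.Printf("healthz: %s\n", body)

        // /version reports the control-plane build; v1.30.1 in this run.
        v, err := cs.Discovery().ServerVersion()
        if err != nil {
            return fmt.Errorf("version: %w", err)
        }
        fmt.Printf("control plane version: %s\n", v.GitVersion)
        return nil
    }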
	I0528 13:47:41.201606    4516 system_pods.go:43] waiting for kube-system pods to appear ...
	I0528 13:47:41.332388    4516 request.go:629] Waited for 130.5913ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.211.78:8441/api/v1/namespaces/kube-system/pods
	I0528 13:47:41.332667    4516 round_trippers.go:463] GET https://172.18.211.78:8441/api/v1/namespaces/kube-system/pods
	I0528 13:47:41.332730    4516 round_trippers.go:469] Request Headers:
	I0528 13:47:41.332730    4516 round_trippers.go:473]     Accept: application/json, */*
	I0528 13:47:41.332730    4516 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 13:47:41.340178    4516 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0528 13:47:41.340178    4516 round_trippers.go:577] Response Headers:
	I0528 13:47:41.340178    4516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6b7b9069-82e0-4cfd-9d0b-960d5eab4b37
	I0528 13:47:41.340178    4516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55d3383c-44bc-4b11-ad23-e49bd4d694f6
	I0528 13:47:41.340178    4516 round_trippers.go:580]     Date: Tue, 28 May 2024 20:47:41 GMT
	I0528 13:47:41.340178    4516 round_trippers.go:580]     Audit-Id: 5480b0e5-1ef3-4266-bfe6-f300a31751c2
	I0528 13:47:41.340178    4516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 13:47:41.340178    4516 round_trippers.go:580]     Content-Type: application/json
	I0528 13:47:41.341273    4516 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"622"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-v57rj","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a1552367-c51a-4f98-ab64-a7f090b4f28b","resourceVersion":"605","creationTimestamp":"2024-05-28T20:44:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"85bafc8c-e4f5-47d9-8336-e1de98adfcba","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-28T20:44:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"85bafc8c-e4f5-47d9-8336-e1de98adfcba\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 51104 chars]
	I0528 13:47:41.344370    4516 system_pods.go:59] 7 kube-system pods found
	I0528 13:47:41.344370    4516 system_pods.go:61] "coredns-7db6d8ff4d-v57rj" [a1552367-c51a-4f98-ab64-a7f090b4f28b] Running
	I0528 13:47:41.344370    4516 system_pods.go:61] "etcd-functional-200200" [a5cc1ae2-d338-4a08-9a90-3ee5ff5e04b7] Running
	I0528 13:47:41.344370    4516 system_pods.go:61] "kube-apiserver-functional-200200" [1562eaca-402c-4134-9e2c-30da4d4a592e] Running
	I0528 13:47:41.344370    4516 system_pods.go:61] "kube-controller-manager-functional-200200" [4044c46e-3c64-4904-8e8e-88dc2dcd9446] Running
	I0528 13:47:41.344370    4516 system_pods.go:61] "kube-proxy-rnt88" [a5657586-8c98-4a0a-9ef7-cac5630f1de6] Running
	I0528 13:47:41.344370    4516 system_pods.go:61] "kube-scheduler-functional-200200" [5d092a1b-e22b-4786-9aea-7b0993dfde51] Running
	I0528 13:47:41.344370    4516 system_pods.go:61] "storage-provisioner" [8091a673-4937-4785-9a43-23a648f10e03] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0528 13:47:41.344370    4516 system_pods.go:74] duration metric: took 142.7643ms to wait for pod list to return data ...
	I0528 13:47:41.344370    4516 default_sa.go:34] waiting for default service account to be created ...
	I0528 13:47:41.420129    4516 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 13:47:41.420129    4516 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:47:41.423464    4516 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0528 13:47:41.420794    4516 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 13:47:41.426496    4516 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:47:41.426879    4516 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0528 13:47:41.426879    4516 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0528 13:47:41.427063    4516 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-200200 ).state
	I0528 13:47:41.427423    4516 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0528 13:47:41.428047    4516 kapi.go:59] client config for functional-200200: &rest.Config{Host:"https://172.18.211.78:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\functional-200200\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\functional-200200\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil),
CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1a6b5a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0528 13:47:41.428781    4516 addons.go:234] Setting addon default-storageclass=true in "functional-200200"
	W0528 13:47:41.428908    4516 addons.go:243] addon default-storageclass should already be in state true
	I0528 13:47:41.428908    4516 host.go:66] Checking if "functional-200200" exists ...
	I0528 13:47:41.430167    4516 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-200200 ).state
	I0528 13:47:41.536458    4516 request.go:629] Waited for 192.0877ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.211.78:8441/api/v1/namespaces/default/serviceaccounts
	I0528 13:47:41.536458    4516 round_trippers.go:463] GET https://172.18.211.78:8441/api/v1/namespaces/default/serviceaccounts
	I0528 13:47:41.536458    4516 round_trippers.go:469] Request Headers:
	I0528 13:47:41.536458    4516 round_trippers.go:473]     Accept: application/json, */*
	I0528 13:47:41.536458    4516 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 13:47:41.540191    4516 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 13:47:41.541148    4516 round_trippers.go:577] Response Headers:
	I0528 13:47:41.541196    4516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55d3383c-44bc-4b11-ad23-e49bd4d694f6
	I0528 13:47:41.541196    4516 round_trippers.go:580]     Content-Length: 261
	I0528 13:47:41.541254    4516 round_trippers.go:580]     Date: Tue, 28 May 2024 20:47:41 GMT
	I0528 13:47:41.541254    4516 round_trippers.go:580]     Audit-Id: 844f50b1-0e94-42b0-9663-c2ea38f009a6
	I0528 13:47:41.541254    4516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 13:47:41.541254    4516 round_trippers.go:580]     Content-Type: application/json
	I0528 13:47:41.541254    4516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6b7b9069-82e0-4cfd-9d0b-960d5eab4b37
	I0528 13:47:41.541254    4516 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"623"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"e441e598-72a1-40e5-9b08-1924367ad121","resourceVersion":"335","creationTimestamp":"2024-05-28T20:44:55Z"}}]}
	I0528 13:47:41.541254    4516 default_sa.go:45] found service account: "default"
	I0528 13:47:41.541254    4516 default_sa.go:55] duration metric: took 196.8835ms for default service account to be created ...
	I0528 13:47:41.541254    4516 system_pods.go:116] waiting for k8s-apps to be running ...
	I0528 13:47:41.727050    4516 request.go:629] Waited for 185.0445ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.211.78:8441/api/v1/namespaces/kube-system/pods
	I0528 13:47:41.727050    4516 round_trippers.go:463] GET https://172.18.211.78:8441/api/v1/namespaces/kube-system/pods
	I0528 13:47:41.727050    4516 round_trippers.go:469] Request Headers:
	I0528 13:47:41.727050    4516 round_trippers.go:473]     Accept: application/json, */*
	I0528 13:47:41.727050    4516 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 13:47:41.731984    4516 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 13:47:41.732172    4516 round_trippers.go:577] Response Headers:
	I0528 13:47:41.732172    4516 round_trippers.go:580]     Audit-Id: 5f50dc10-c3bf-4530-8e2a-79b4c4ca70d1
	I0528 13:47:41.732172    4516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 13:47:41.732172    4516 round_trippers.go:580]     Content-Type: application/json
	I0528 13:47:41.732172    4516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6b7b9069-82e0-4cfd-9d0b-960d5eab4b37
	I0528 13:47:41.732172    4516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55d3383c-44bc-4b11-ad23-e49bd4d694f6
	I0528 13:47:41.732172    4516 round_trippers.go:580]     Date: Tue, 28 May 2024 20:47:41 GMT
	I0528 13:47:41.733087    4516 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"623"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-v57rj","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"a1552367-c51a-4f98-ab64-a7f090b4f28b","resourceVersion":"605","creationTimestamp":"2024-05-28T20:44:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"85bafc8c-e4f5-47d9-8336-e1de98adfcba","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-28T20:44:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"85bafc8c-e4f5-47d9-8336-e1de98adfcba\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 50710 chars]
	I0528 13:47:41.735653    4516 system_pods.go:86] 7 kube-system pods found
	I0528 13:47:41.735733    4516 system_pods.go:89] "coredns-7db6d8ff4d-v57rj" [a1552367-c51a-4f98-ab64-a7f090b4f28b] Running
	I0528 13:47:41.735733    4516 system_pods.go:89] "etcd-functional-200200" [a5cc1ae2-d338-4a08-9a90-3ee5ff5e04b7] Running
	I0528 13:47:41.735733    4516 system_pods.go:89] "kube-apiserver-functional-200200" [1562eaca-402c-4134-9e2c-30da4d4a592e] Running
	I0528 13:47:41.735733    4516 system_pods.go:89] "kube-controller-manager-functional-200200" [4044c46e-3c64-4904-8e8e-88dc2dcd9446] Running
	I0528 13:47:41.735733    4516 system_pods.go:89] "kube-proxy-rnt88" [a5657586-8c98-4a0a-9ef7-cac5630f1de6] Running
	I0528 13:47:41.735733    4516 system_pods.go:89] "kube-scheduler-functional-200200" [5d092a1b-e22b-4786-9aea-7b0993dfde51] Running
	I0528 13:47:41.735733    4516 system_pods.go:89] "storage-provisioner" [8091a673-4937-4785-9a43-23a648f10e03] Running
	I0528 13:47:41.735733    4516 system_pods.go:126] duration metric: took 194.4789ms to wait for k8s-apps to be running ...
	I0528 13:47:41.735733    4516 system_svc.go:44] waiting for kubelet service to be running ....
	I0528 13:47:41.737268    4516 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0528 13:47:41.778935    4516 system_svc.go:56] duration metric: took 43.202ms WaitForService to wait for kubelet
	I0528 13:47:41.778935    4516 kubeadm.go:576] duration metric: took 2.6031405s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0528 13:47:41.778935    4516 node_conditions.go:102] verifying NodePressure condition ...
	I0528 13:47:41.938888    4516 request.go:629] Waited for 159.7998ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.211.78:8441/api/v1/nodes
	I0528 13:47:41.939139    4516 round_trippers.go:463] GET https://172.18.211.78:8441/api/v1/nodes
	I0528 13:47:41.939279    4516 round_trippers.go:469] Request Headers:
	I0528 13:47:41.939279    4516 round_trippers.go:473]     Accept: application/json, */*
	I0528 13:47:41.939328    4516 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 13:47:41.943246    4516 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 13:47:41.943973    4516 round_trippers.go:577] Response Headers:
	I0528 13:47:41.943973    4516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55d3383c-44bc-4b11-ad23-e49bd4d694f6
	I0528 13:47:41.943973    4516 round_trippers.go:580]     Date: Tue, 28 May 2024 20:47:41 GMT
	I0528 13:47:41.943973    4516 round_trippers.go:580]     Audit-Id: 38166877-f3cd-40f1-800c-867d5c2b6d1e
	I0528 13:47:41.944073    4516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 13:47:41.944129    4516 round_trippers.go:580]     Content-Type: application/json
	I0528 13:47:41.944129    4516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6b7b9069-82e0-4cfd-9d0b-960d5eab4b37
	I0528 13:47:41.944372    4516 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"623"},"items":[{"metadata":{"name":"functional-200200","uid":"1b987c37-282d-4e68-a3b1-23ab15a86979","resourceVersion":"532","creationTimestamp":"2024-05-28T20:44:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-200200","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"functional-200200","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T13_44_42_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedF
ields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","ti [truncated 4840 chars]
	I0528 13:47:41.944372    4516 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0528 13:47:41.944971    4516 node_conditions.go:123] node cpu capacity is 2
	I0528 13:47:41.944971    4516 node_conditions.go:105] duration metric: took 166.035ms to run NodePressure ...
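Editor's note: the NodePressure verification reads each node's capacity (17734596Ki of ephemeral storage and 2 CPUs in this run) and would flag any pressure condition that is set. A sketch of an equivalent read, again using client-go and the clientset from the first sketch:

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // verifyNodePressure lists nodes, prints capacity, and flags any pressure condition.
    // Illustrative only; field names follow the core/v1 Node API.
    func verifyNodePressure(ctx context.Context, cs *kubernetes.Clientset) error {
        nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
        if err != nil {
            return err
        }
        for _, n := range nodes.Items {
            storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
            for _, c := range n.Status.Conditions {
                switch c.Type {
                case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
                    if c.Status == corev1.ConditionTrue {
                        return fmt.Errorf("node %s reports %s", n.Name, c.Type)
                    }
                }
            }
        }
        return nil
    }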
	I0528 13:47:41.945042    4516 start.go:240] waiting for startup goroutines ...
	I0528 13:47:43.698588    4516 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 13:47:43.698588    4516 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:47:43.698714    4516 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-200200 ).networkadapters[0]).ipaddresses[0]
	I0528 13:47:43.728865    4516 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 13:47:43.728865    4516 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:47:43.728865    4516 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0528 13:47:43.728865    4516 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0528 13:47:43.728865    4516 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-200200 ).state
	I0528 13:47:45.934306    4516 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 13:47:45.934306    4516 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:47:45.934306    4516 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-200200 ).networkadapters[0]).ipaddresses[0]
	I0528 13:47:46.308170    4516 main.go:141] libmachine: [stdout =====>] : 172.18.211.78
	
	I0528 13:47:46.308659    4516 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:47:46.308896    4516 sshutil.go:53] new ssh client: &{IP:172.18.211.78 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-200200\id_rsa Username:docker}
	I0528 13:47:46.445764    4516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0528 13:47:47.260196    4516 command_runner.go:130] > serviceaccount/storage-provisioner unchanged
	I0528 13:47:47.260275    4516 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner unchanged
	I0528 13:47:47.260275    4516 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I0528 13:47:47.260275    4516 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I0528 13:47:47.260275    4516 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath unchanged
	I0528 13:47:47.260370    4516 command_runner.go:130] > pod/storage-provisioner configured
	I0528 13:47:48.536722    4516 main.go:141] libmachine: [stdout =====>] : 172.18.211.78
	
	I0528 13:47:48.536772    4516 main.go:141] libmachine: [stderr =====>] : 
	I0528 13:47:48.536772    4516 sshutil.go:53] new ssh client: &{IP:172.18.211.78 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-200200\id_rsa Username:docker}
	I0528 13:47:48.668865    4516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0528 13:47:48.836001    4516 command_runner.go:130] > storageclass.storage.k8s.io/standard unchanged
	I0528 13:47:48.836295    4516 round_trippers.go:463] GET https://172.18.211.78:8441/apis/storage.k8s.io/v1/storageclasses
	I0528 13:47:48.836374    4516 round_trippers.go:469] Request Headers:
	I0528 13:47:48.836374    4516 round_trippers.go:473]     Accept: application/json, */*
	I0528 13:47:48.836374    4516 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 13:47:48.842061    4516 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0528 13:47:48.842199    4516 round_trippers.go:577] Response Headers:
	I0528 13:47:48.842199    4516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6b7b9069-82e0-4cfd-9d0b-960d5eab4b37
	I0528 13:47:48.842199    4516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55d3383c-44bc-4b11-ad23-e49bd4d694f6
	I0528 13:47:48.842252    4516 round_trippers.go:580]     Content-Length: 1273
	I0528 13:47:48.842252    4516 round_trippers.go:580]     Date: Tue, 28 May 2024 20:47:48 GMT
	I0528 13:47:48.842252    4516 round_trippers.go:580]     Audit-Id: fb947e06-c937-4816-909b-60fc3d907525
	I0528 13:47:48.842282    4516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 13:47:48.842282    4516 round_trippers.go:580]     Content-Type: application/json
	I0528 13:47:48.842307    4516 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"626"},"items":[{"metadata":{"name":"standard","uid":"d60d233e-3731-4fa0-9d06-16e6d6819413","resourceVersion":"429","creationTimestamp":"2024-05-28T20:45:05Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-05-28T20:45:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0528 13:47:48.843125    4516 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"d60d233e-3731-4fa0-9d06-16e6d6819413","resourceVersion":"429","creationTimestamp":"2024-05-28T20:45:05Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-05-28T20:45:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0528 13:47:48.843202    4516 round_trippers.go:463] PUT https://172.18.211.78:8441/apis/storage.k8s.io/v1/storageclasses/standard
	I0528 13:47:48.843202    4516 round_trippers.go:469] Request Headers:
	I0528 13:47:48.843202    4516 round_trippers.go:473]     Accept: application/json, */*
	I0528 13:47:48.843202    4516 round_trippers.go:473]     Content-Type: application/json
	I0528 13:47:48.843202    4516 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 13:47:48.847821    4516 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 13:47:48.847821    4516 round_trippers.go:577] Response Headers:
	I0528 13:47:48.848312    4516 round_trippers.go:580]     Audit-Id: 6967f5a5-9ae8-42d1-80b9-fc2a3299001a
	I0528 13:47:48.848312    4516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 13:47:48.848312    4516 round_trippers.go:580]     Content-Type: application/json
	I0528 13:47:48.848312    4516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6b7b9069-82e0-4cfd-9d0b-960d5eab4b37
	I0528 13:47:48.848312    4516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55d3383c-44bc-4b11-ad23-e49bd4d694f6
	I0528 13:47:48.848312    4516 round_trippers.go:580]     Content-Length: 1220
	I0528 13:47:48.848312    4516 round_trippers.go:580]     Date: Tue, 28 May 2024 20:47:48 GMT
	I0528 13:47:48.848467    4516 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"d60d233e-3731-4fa0-9d06-16e6d6819413","resourceVersion":"429","creationTimestamp":"2024-05-28T20:45:05Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-05-28T20:45:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0528 13:47:48.851882    4516 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0528 13:47:48.855013    4516 addons.go:510] duration metric: took 9.6792078s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0528 13:47:48.855013    4516 start.go:245] waiting for cluster config update ...
	I0528 13:47:48.855685    4516 start.go:254] writing updated cluster config ...
	I0528 13:47:48.868295    4516 ssh_runner.go:195] Run: rm -f paused
	I0528 13:47:49.012415    4516 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0528 13:47:49.018176    4516 out.go:177] * Done! kubectl is now configured to use "functional-200200" cluster and "default" namespace by default
	
	
	==> Docker <==
	May 28 20:47:23 functional-200200 dockerd[4048]: time="2024-05-28T20:47:23.585638868Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 28 20:47:23 functional-200200 dockerd[4048]: time="2024-05-28T20:47:23.585814470Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 28 20:47:23 functional-200200 cri-dockerd[4268]: time="2024-05-28T20:47:23Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/1a9c934dae19ce3c0846231906f61ac7c3eca6cb9037fc284702575b500e2a17/resolv.conf as [nameserver 172.18.208.1]"
	May 28 20:47:23 functional-200200 cri-dockerd[4268]: time="2024-05-28T20:47:23Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/36e5d7fb470c3846e359c6bc0f476ac926ea66b5070ab19282835affb1131b34/resolv.conf as [nameserver 172.18.208.1]"
	May 28 20:47:23 functional-200200 dockerd[4048]: time="2024-05-28T20:47:23.959616967Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 28 20:47:23 functional-200200 dockerd[4048]: time="2024-05-28T20:47:23.960011772Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 28 20:47:23 functional-200200 dockerd[4048]: time="2024-05-28T20:47:23.960239575Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 28 20:47:23 functional-200200 dockerd[4048]: time="2024-05-28T20:47:23.960657581Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 28 20:47:23 functional-200200 dockerd[4048]: time="2024-05-28T20:47:23.966536658Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 28 20:47:23 functional-200200 dockerd[4048]: time="2024-05-28T20:47:23.966806661Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 28 20:47:23 functional-200200 dockerd[4048]: time="2024-05-28T20:47:23.966961463Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 28 20:47:23 functional-200200 dockerd[4048]: time="2024-05-28T20:47:23.967318068Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 28 20:47:24 functional-200200 cri-dockerd[4268]: time="2024-05-28T20:47:24Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/28e8eebb43ca94523262a00d251114ccbf2435ca2ee0ef127031b79496b74364/resolv.conf as [nameserver 172.18.208.1]"
	May 28 20:47:24 functional-200200 dockerd[4041]: time="2024-05-28T20:47:24.259187899Z" level=info msg="ignoring event" container=6c0432f0a3385391ac458a4d2a5d948143cb84c37daa77a7d6f0eb59fbed2736 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 28 20:47:24 functional-200200 dockerd[4048]: time="2024-05-28T20:47:24.263763887Z" level=info msg="shim disconnected" id=6c0432f0a3385391ac458a4d2a5d948143cb84c37daa77a7d6f0eb59fbed2736 namespace=moby
	May 28 20:47:24 functional-200200 dockerd[4048]: time="2024-05-28T20:47:24.263948886Z" level=warning msg="cleaning up after shim disconnected" id=6c0432f0a3385391ac458a4d2a5d948143cb84c37daa77a7d6f0eb59fbed2736 namespace=moby
	May 28 20:47:24 functional-200200 dockerd[4048]: time="2024-05-28T20:47:24.263967586Z" level=info msg="cleaning up dead shim" namespace=moby
	May 28 20:47:24 functional-200200 dockerd[4048]: time="2024-05-28T20:47:24.437842410Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 28 20:47:24 functional-200200 dockerd[4048]: time="2024-05-28T20:47:24.437952210Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 28 20:47:24 functional-200200 dockerd[4048]: time="2024-05-28T20:47:24.437972210Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 28 20:47:24 functional-200200 dockerd[4048]: time="2024-05-28T20:47:24.438107410Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 28 20:47:41 functional-200200 dockerd[4048]: time="2024-05-28T20:47:41.081878978Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 28 20:47:41 functional-200200 dockerd[4048]: time="2024-05-28T20:47:41.082060975Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 28 20:47:41 functional-200200 dockerd[4048]: time="2024-05-28T20:47:41.082090474Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 28 20:47:41 functional-200200 dockerd[4048]: time="2024-05-28T20:47:41.082384969Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	ea105908efc6d       6e38f40d628db       About a minute ago   Running             storage-provisioner       3                   1a9c934dae19c       storage-provisioner
	138f96a1042a8       cbb01a7bd410d       2 minutes ago        Running             coredns                   1                   28e8eebb43ca9       coredns-7db6d8ff4d-v57rj
	820439ad79e0b       747097150317f       2 minutes ago        Running             kube-proxy                1                   36e5d7fb470c3       kube-proxy-rnt88
	6c0432f0a3385       6e38f40d628db       2 minutes ago        Exited              storage-provisioner       2                   1a9c934dae19c       storage-provisioner
	965bdc26dc30c       25a1387cdab82       2 minutes ago        Running             kube-controller-manager   2                   a415b641c2857       kube-controller-manager-functional-200200
	9873a5f313f1b       91be940803172       2 minutes ago        Running             kube-apiserver            2                   6e590c2babdda       kube-apiserver-functional-200200
	8a9f1802c6a12       a52dc94f0a912       2 minutes ago        Running             kube-scheduler            2                   d568e56bf58a6       kube-scheduler-functional-200200
	20b67dfcea188       3861cfcd7c04c       2 minutes ago        Running             etcd                      2                   27f132e8fc47b       etcd-functional-200200
	70cbb0c9545cb       91be940803172       2 minutes ago        Created             kube-apiserver            1                   3c583f4b89621       kube-apiserver-functional-200200
	f434b254d9712       a52dc94f0a912       2 minutes ago        Created             kube-scheduler            1                   a328e358e7d89       kube-scheduler-functional-200200
	0083931a2039b       3861cfcd7c04c       2 minutes ago        Created             etcd                      1                   788bf28709015       etcd-functional-200200
	39a267d9dee8f       25a1387cdab82       2 minutes ago        Exited              kube-controller-manager   1                   684e61568c39e       kube-controller-manager-functional-200200
	5cfea97ba8894       cbb01a7bd410d       4 minutes ago        Exited              coredns                   0                   2fd08f1f32153       coredns-7db6d8ff4d-v57rj
	6e66c448cf750       747097150317f       4 minutes ago        Exited              kube-proxy                0                   b7c0ef99a550e       kube-proxy-rnt88
	
	
	==> coredns [138f96a1042a] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 84fd659ad9ac94c01df8e848594b3a1ced14aceda7cd3c6e3545e3a962a1b1458ec39f5e77d24a8e8c54353ecf39f49081a6595b1a2779ec288b57d4bd17c052
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:56097 - 24396 "HINFO IN 6731821934121025722.8367953441523372260. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.04401458s
	
	
	==> coredns [5cfea97ba889] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 84fd659ad9ac94c01df8e848594b3a1ced14aceda7cd3c6e3545e3a962a1b1458ec39f5e77d24a8e8c54353ecf39f49081a6595b1a2779ec288b57d4bd17c052
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:53198 - 61092 "HINFO IN 728764036198153982.1550563011657979365. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.050567395s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1515619459]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (28-May-2024 20:44:58.310) (total time: 30001ms):
	Trace[1515619459]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (20:45:28.311)
	Trace[1515619459]: [30.001463337s] [30.001463337s] END
	[INFO] plugin/kubernetes: Trace[868070682]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (28-May-2024 20:44:58.311) (total time: 30000ms):
	Trace[868070682]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (20:45:28.311)
	Trace[868070682]: [30.000316939s] [30.000316939s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1375997842]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (28-May-2024 20:44:58.311) (total time: 30000ms):
	Trace[1375997842]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (20:45:28.312)
	Trace[1375997842]: [30.000660619s] [30.000660619s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-200200
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-200200
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c95b4fdda455689199e2a93674568b261e34dc82
	                    minikube.k8s.io/name=functional-200200
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_28T13_44_42_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 28 May 2024 20:44:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-200200
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 28 May 2024 20:49:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 28 May 2024 20:49:24 +0000   Tue, 28 May 2024 20:44:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 28 May 2024 20:49:24 +0000   Tue, 28 May 2024 20:44:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 28 May 2024 20:49:24 +0000   Tue, 28 May 2024 20:44:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 28 May 2024 20:49:24 +0000   Tue, 28 May 2024 20:44:43 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.18.211.78
	  Hostname:    functional-200200
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912872Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912872Ki
	  pods:               110
	System Info:
	  Machine ID:                 044f79f88e834868bbb7725c13cc92ff
	  System UUID:                daf5b1a4-919f-3744-b68f-85d9ce54c10c
	  Boot ID:                    12877702-b5af-42b8-b4a3-d4e333c2a59b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-v57rj                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m39s
	  kube-system                 etcd-functional-200200                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m53s
	  kube-system                 kube-apiserver-functional-200200             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m53s
	  kube-system                 kube-controller-manager-functional-200200    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m53s
	  kube-system                 kube-proxy-rnt88                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m39s
	  kube-system                 kube-scheduler-functional-200200             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m55s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m31s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m36s                  kube-proxy       
	  Normal  Starting                 2m10s                  kube-proxy       
	  Normal  NodeHasSufficientPID     5m (x7 over 5m)        kubelet          Node functional-200200 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    5m (x8 over 5m)        kubelet          Node functional-200200 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  5m (x8 over 5m)        kubelet          Node functional-200200 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  5m                     kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m53s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m53s                  kubelet          Node functional-200200 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m53s                  kubelet          Node functional-200200 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m53s                  kubelet          Node functional-200200 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m53s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                4m51s                  kubelet          Node functional-200200 status is now: NodeReady
	  Normal  RegisteredNode           4m39s                  node-controller  Node functional-200200 event: Registered Node functional-200200 in Controller
	  Normal  Starting                 2m17s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m16s (x8 over 2m17s)  kubelet          Node functional-200200 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m16s (x8 over 2m17s)  kubelet          Node functional-200200 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m16s (x7 over 2m17s)  kubelet          Node functional-200200 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m16s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           119s                   node-controller  Node functional-200200 event: Registered Node functional-200200 in Controller
	
	
	==> dmesg <==
	[  +5.395361] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.665011] systemd-fstab-generator[1524]: Ignoring "noauto" option for root device
	[  +6.104516] systemd-fstab-generator[1721]: Ignoring "noauto" option for root device
	[  +0.101791] kauditd_printk_skb: 51 callbacks suppressed
	[  +8.019464] systemd-fstab-generator[2132]: Ignoring "noauto" option for root device
	[  +0.117213] kauditd_printk_skb: 62 callbacks suppressed
	[ +14.467104] systemd-fstab-generator[2372]: Ignoring "noauto" option for root device
	[  +0.207211] kauditd_printk_skb: 12 callbacks suppressed
	[May28 20:45] kauditd_printk_skb: 69 callbacks suppressed
	[May28 20:46] systemd-fstab-generator[3564]: Ignoring "noauto" option for root device
	[  +0.672623] systemd-fstab-generator[3600]: Ignoring "noauto" option for root device
	[  +0.270673] systemd-fstab-generator[3613]: Ignoring "noauto" option for root device
	[  +0.285617] systemd-fstab-generator[3642]: Ignoring "noauto" option for root device
	[May28 20:47] kauditd_printk_skb: 91 callbacks suppressed
	[  +7.809289] systemd-fstab-generator[4217]: Ignoring "noauto" option for root device
	[  +0.200230] systemd-fstab-generator[4229]: Ignoring "noauto" option for root device
	[  +0.210467] systemd-fstab-generator[4241]: Ignoring "noauto" option for root device
	[  +0.290288] systemd-fstab-generator[4256]: Ignoring "noauto" option for root device
	[  +0.887976] systemd-fstab-generator[4412]: Ignoring "noauto" option for root device
	[  +4.143634] systemd-fstab-generator[5183]: Ignoring "noauto" option for root device
	[  +0.116445] kauditd_printk_skb: 188 callbacks suppressed
	[  +5.865814] kauditd_printk_skb: 47 callbacks suppressed
	[ +12.265147] kauditd_printk_skb: 30 callbacks suppressed
	[  +3.467397] systemd-fstab-generator[6170]: Ignoring "noauto" option for root device
	[  +1.833846] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [0083931a2039] <==
	
	
	==> etcd [20b67dfcea18] <==
	{"level":"info","ts":"2024-05-28T20:47:18.929085Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-05-28T20:47:18.929094Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-05-28T20:47:18.929385Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3b53ce59c47ff951 switched to configuration voters=(4274987356248865105)"}
	{"level":"info","ts":"2024-05-28T20:47:18.92944Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"61948c2756bd3f54","local-member-id":"3b53ce59c47ff951","added-peer-id":"3b53ce59c47ff951","added-peer-peer-urls":["https://172.18.211.78:2380"]}
	{"level":"info","ts":"2024-05-28T20:47:18.929525Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"61948c2756bd3f54","local-member-id":"3b53ce59c47ff951","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-28T20:47:18.929629Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-28T20:47:18.934252Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-05-28T20:47:18.934891Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"3b53ce59c47ff951","initial-advertise-peer-urls":["https://172.18.211.78:2380"],"listen-peer-urls":["https://172.18.211.78:2380"],"advertise-client-urls":["https://172.18.211.78:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.18.211.78:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-05-28T20:47:18.937033Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.18.211.78:2380"}
	{"level":"info","ts":"2024-05-28T20:47:18.938489Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.18.211.78:2380"}
	{"level":"info","ts":"2024-05-28T20:47:18.938867Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-05-28T20:47:20.776824Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3b53ce59c47ff951 is starting a new election at term 2"}
	{"level":"info","ts":"2024-05-28T20:47:20.777347Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3b53ce59c47ff951 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-05-28T20:47:20.777605Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3b53ce59c47ff951 received MsgPreVoteResp from 3b53ce59c47ff951 at term 2"}
	{"level":"info","ts":"2024-05-28T20:47:20.777823Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3b53ce59c47ff951 became candidate at term 3"}
	{"level":"info","ts":"2024-05-28T20:47:20.778053Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3b53ce59c47ff951 received MsgVoteResp from 3b53ce59c47ff951 at term 3"}
	{"level":"info","ts":"2024-05-28T20:47:20.77834Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3b53ce59c47ff951 became leader at term 3"}
	{"level":"info","ts":"2024-05-28T20:47:20.778612Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 3b53ce59c47ff951 elected leader 3b53ce59c47ff951 at term 3"}
	{"level":"info","ts":"2024-05-28T20:47:20.783644Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-28T20:47:20.783596Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"3b53ce59c47ff951","local-member-attributes":"{Name:functional-200200 ClientURLs:[https://172.18.211.78:2379]}","request-path":"/0/members/3b53ce59c47ff951/attributes","cluster-id":"61948c2756bd3f54","publish-timeout":"7s"}
	{"level":"info","ts":"2024-05-28T20:47:20.786197Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-28T20:47:20.787484Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-28T20:47:20.787531Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-05-28T20:47:20.790656Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.18.211.78:2379"}
	{"level":"info","ts":"2024-05-28T20:47:20.791185Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 20:49:34 up 7 min,  0 users,  load average: 1.22, 0.99, 0.46
	Linux functional-200200 5.10.207 #1 SMP Wed May 22 22:17:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [70cbb0c9545c] <==
	
	
	==> kube-apiserver [9873a5f313f1] <==
	I0528 20:47:22.641661       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0528 20:47:22.654156       1 aggregator.go:165] initial CRD sync complete...
	I0528 20:47:22.654255       1 autoregister_controller.go:141] Starting autoregister controller
	I0528 20:47:22.654317       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0528 20:47:22.655142       1 cache.go:39] Caches are synced for autoregister controller
	I0528 20:47:22.656511       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0528 20:47:22.657097       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0528 20:47:22.657140       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0528 20:47:22.676823       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0528 20:47:22.698397       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0528 20:47:22.698733       1 policy_source.go:224] refreshing policies
	I0528 20:47:22.699535       1 shared_informer.go:320] Caches are synced for configmaps
	I0528 20:47:22.716614       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0528 20:47:22.742739       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0528 20:47:22.744430       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0528 20:47:22.750963       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0528 20:47:23.529621       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0528 20:47:24.160892       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.18.211.78]
	I0528 20:47:24.163766       1 controller.go:615] quota admission added evaluator for: endpoints
	I0528 20:47:24.193070       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0528 20:47:24.807058       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0528 20:47:24.833877       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0528 20:47:24.938702       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0528 20:47:25.020613       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0528 20:47:25.032742       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-controller-manager [39a267d9dee8] <==
	
	
	==> kube-controller-manager [965bdc26dc30] <==
	I0528 20:47:35.787134       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0528 20:47:35.787628       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="185.596µs"
	I0528 20:47:35.787194       1 shared_informer.go:320] Caches are synced for GC
	I0528 20:47:35.789372       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0528 20:47:35.790903       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0528 20:47:35.791109       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0528 20:47:35.792568       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0528 20:47:35.792617       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0528 20:47:35.792749       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0528 20:47:35.858895       1 shared_informer.go:320] Caches are synced for taint
	I0528 20:47:35.859258       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0528 20:47:35.859493       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-200200"
	I0528 20:47:35.859831       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0528 20:47:35.863084       1 shared_informer.go:320] Caches are synced for daemon sets
	I0528 20:47:35.897907       1 shared_informer.go:320] Caches are synced for attach detach
	I0528 20:47:35.911362       1 shared_informer.go:320] Caches are synced for disruption
	I0528 20:47:35.918215       1 shared_informer.go:320] Caches are synced for stateful set
	I0528 20:47:35.922157       1 shared_informer.go:320] Caches are synced for cronjob
	I0528 20:47:35.957813       1 shared_informer.go:320] Caches are synced for resource quota
	I0528 20:47:35.975039       1 shared_informer.go:320] Caches are synced for resource quota
	I0528 20:47:36.005694       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0528 20:47:36.013587       1 shared_informer.go:320] Caches are synced for job
	I0528 20:47:36.405401       1 shared_informer.go:320] Caches are synced for garbage collector
	I0528 20:47:36.444831       1 shared_informer.go:320] Caches are synced for garbage collector
	I0528 20:47:36.444938       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [6e66c448cf75] <==
	I0528 20:44:57.930993       1 server_linux.go:69] "Using iptables proxy"
	I0528 20:44:57.961794       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.18.211.78"]
	I0528 20:44:58.092748       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0528 20:44:58.093007       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0528 20:44:58.093145       1 server_linux.go:165] "Using iptables Proxier"
	I0528 20:44:58.106924       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0528 20:44:58.107796       1 server.go:872] "Version info" version="v1.30.1"
	I0528 20:44:58.108088       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0528 20:44:58.110118       1 config.go:192] "Starting service config controller"
	I0528 20:44:58.110757       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0528 20:44:58.111107       1 config.go:101] "Starting endpoint slice config controller"
	I0528 20:44:58.111121       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0528 20:44:58.112118       1 config.go:319] "Starting node config controller"
	I0528 20:44:58.112137       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0528 20:44:58.213597       1 shared_informer.go:320] Caches are synced for service config
	I0528 20:44:58.213647       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0528 20:44:58.214611       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [820439ad79e0] <==
	I0528 20:47:24.322185       1 server_linux.go:69] "Using iptables proxy"
	I0528 20:47:24.344850       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.18.211.78"]
	I0528 20:47:24.403719       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0528 20:47:24.403776       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0528 20:47:24.403812       1 server_linux.go:165] "Using iptables Proxier"
	I0528 20:47:24.416532       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0528 20:47:24.416789       1 server.go:872] "Version info" version="v1.30.1"
	I0528 20:47:24.416825       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0528 20:47:24.418836       1 config.go:192] "Starting service config controller"
	I0528 20:47:24.418868       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0528 20:47:24.418904       1 config.go:101] "Starting endpoint slice config controller"
	I0528 20:47:24.418910       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0528 20:47:24.419312       1 config.go:319] "Starting node config controller"
	I0528 20:47:24.419331       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0528 20:47:24.519894       1 shared_informer.go:320] Caches are synced for service config
	I0528 20:47:24.520095       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0528 20:47:24.522789       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [8a9f1802c6a1] <==
	W0528 20:47:22.648425       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0528 20:47:22.654584       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0528 20:47:22.648648       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0528 20:47:22.654798       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0528 20:47:22.648928       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0528 20:47:22.654922       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0528 20:47:22.649711       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0528 20:47:22.655049       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0528 20:47:22.649906       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0528 20:47:22.655471       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0528 20:47:22.655815       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0528 20:47:22.655966       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0528 20:47:22.656391       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0528 20:47:22.658322       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0528 20:47:22.657571       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0528 20:47:22.658588       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0528 20:47:22.657760       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0528 20:47:22.657938       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0528 20:47:22.658702       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0528 20:47:22.659197       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0528 20:47:22.659065       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0528 20:47:22.661943       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0528 20:47:22.659566       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0528 20:47:22.659576       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0528 20:47:23.626361       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [f434b254d971] <==
	
	
	==> kubelet <==
	May 28 20:47:22 functional-200200 kubelet[5190]: I0528 20:47:22.912740    5190 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a5657586-8c98-4a0a-9ef7-cac5630f1de6-xtables-lock\") pod \"kube-proxy-rnt88\" (UID: \"a5657586-8c98-4a0a-9ef7-cac5630f1de6\") " pod="kube-system/kube-proxy-rnt88"
	May 28 20:47:23 functional-200200 kubelet[5190]: I0528 20:47:23.727716    5190 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="36e5d7fb470c3846e359c6bc0f476ac926ea66b5070ab19282835affb1131b34"
	May 28 20:47:24 functional-200200 kubelet[5190]: I0528 20:47:24.046666    5190 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="28e8eebb43ca94523262a00d251114ccbf2435ca2ee0ef127031b79496b74364"
	May 28 20:47:24 functional-200200 kubelet[5190]: I0528 20:47:24.166138    5190 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1a9c934dae19ce3c0846231906f61ac7c3eca6cb9037fc284702575b500e2a17"
	May 28 20:47:25 functional-200200 kubelet[5190]: I0528 20:47:25.249430    5190 scope.go:117] "RemoveContainer" containerID="f0d2d7cdd4e217a08b4e4a0958bfd452d974fd767eb54fb728f461b2a95fa0c6"
	May 28 20:47:25 functional-200200 kubelet[5190]: I0528 20:47:25.249683    5190 scope.go:117] "RemoveContainer" containerID="6c0432f0a3385391ac458a4d2a5d948143cb84c37daa77a7d6f0eb59fbed2736"
	May 28 20:47:25 functional-200200 kubelet[5190]: E0528 20:47:25.253188    5190 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(8091a673-4937-4785-9a43-23a648f10e03)\"" pod="kube-system/storage-provisioner" podUID="8091a673-4937-4785-9a43-23a648f10e03"
	May 28 20:47:26 functional-200200 kubelet[5190]: I0528 20:47:26.275080    5190 scope.go:117] "RemoveContainer" containerID="6c0432f0a3385391ac458a4d2a5d948143cb84c37daa77a7d6f0eb59fbed2736"
	May 28 20:47:26 functional-200200 kubelet[5190]: E0528 20:47:26.275326    5190 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(8091a673-4937-4785-9a43-23a648f10e03)\"" pod="kube-system/storage-provisioner" podUID="8091a673-4937-4785-9a43-23a648f10e03"
	May 28 20:47:26 functional-200200 kubelet[5190]: I0528 20:47:26.275371    5190 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f0d2d7cdd4e217a08b4e4a0958bfd452d974fd767eb54fb728f461b2a95fa0c6"
	May 28 20:47:26 functional-200200 kubelet[5190]: I0528 20:47:26.275419    5190 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	May 28 20:47:27 functional-200200 kubelet[5190]: I0528 20:47:27.300681    5190 scope.go:117] "RemoveContainer" containerID="6c0432f0a3385391ac458a4d2a5d948143cb84c37daa77a7d6f0eb59fbed2736"
	May 28 20:47:27 functional-200200 kubelet[5190]: E0528 20:47:27.301050    5190 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(8091a673-4937-4785-9a43-23a648f10e03)\"" pod="kube-system/storage-provisioner" podUID="8091a673-4937-4785-9a43-23a648f10e03"
	May 28 20:47:28 functional-200200 kubelet[5190]: I0528 20:47:28.043866    5190 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	May 28 20:47:40 functional-200200 kubelet[5190]: I0528 20:47:40.893724    5190 scope.go:117] "RemoveContainer" containerID="6c0432f0a3385391ac458a4d2a5d948143cb84c37daa77a7d6f0eb59fbed2736"
	May 28 20:48:17 functional-200200 kubelet[5190]: E0528 20:48:17.977644    5190 iptables.go:577] "Could not set up iptables canary" err=<
	May 28 20:48:17 functional-200200 kubelet[5190]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 28 20:48:17 functional-200200 kubelet[5190]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 28 20:48:17 functional-200200 kubelet[5190]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 28 20:48:17 functional-200200 kubelet[5190]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 28 20:49:17 functional-200200 kubelet[5190]: E0528 20:49:17.974030    5190 iptables.go:577] "Could not set up iptables canary" err=<
	May 28 20:49:17 functional-200200 kubelet[5190]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 28 20:49:17 functional-200200 kubelet[5190]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 28 20:49:17 functional-200200 kubelet[5190]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 28 20:49:17 functional-200200 kubelet[5190]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	
	
	==> storage-provisioner [6c0432f0a338] <==
	I0528 20:47:24.178920       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0528 20:47:24.182215       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [ea105908efc6] <==
	I0528 20:47:41.180897       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0528 20:47:41.201455       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0528 20:47:41.201523       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0528 20:47:58.627858       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0528 20:47:58.628358       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-200200_53cf0f8c-b43b-449d-9cb8-9a0f3a932983!
	I0528 20:47:58.630246       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"22ec763f-7c92-41af-bb34-905b797353d9", APIVersion:"v1", ResourceVersion:"630", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-200200_53cf0f8c-b43b-449d-9cb8-9a0f3a932983 became leader
	I0528 20:47:58.728626       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-200200_53cf0f8c-b43b-449d-9cb8-9a0f3a932983!
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0528 13:49:26.741169    7456 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-200200 -n functional-200200
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-200200 -n functional-200200: (12.1643364s)
helpers_test.go:261: (dbg) Run:  kubectl --context functional-200200 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestFunctional/serial/MinikubeKubectlCmdDirectly FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (34.38s)
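The failure is driven by the same stderr noise seen throughout this report: minikube cannot resolve the Docker CLI context "default" because its meta.json is missing under C:\Users\jenkins.minikube1\.docker\contexts\meta\ on the agent. A minimal sketch for inspecting and resetting the CLI context on the agent, assuming the Docker CLI is on PATH (PowerShell):
	# List configured CLI contexts and the currently selected one
	docker context ls
	# Show what the built-in "default" context resolves to
	docker context inspect default
	# Re-select the built-in default context (may be enough to clear the warning)
	docker context use default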

                                                
                                    
TestFunctional/parallel/ConfigCmd (1.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-200200 config unset cpus
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-200200 config unset cpus" to be -""- but got *"W0528 13:52:38.116379   11548 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-200200 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-200200 config get cpus: exit status 14 (212.5789ms)

                                                
                                                
** stderr ** 
	W0528 13:52:38.354041    1872 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-200200 config get cpus" to be -"Error: specified key could not be found in config"- but got *"W0528 13:52:38.354041    1872 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\nError: specified key could not be found in config"*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-200200 config set cpus 2
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-200200 config set cpus 2" to be -"! These changes will take effect upon a minikube delete and then a minikube start"- but got *"W0528 13:52:38.544814   11384 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n! These changes will take effect upon a minikube delete and then a minikube start"*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-200200 config get cpus
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-200200 config get cpus" to be -""- but got *"W0528 13:52:38.773266   15080 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-200200 config unset cpus
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-200200 config unset cpus" to be -""- but got *"W0528 13:52:38.994169    9044 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-200200 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-200200 config get cpus: exit status 14 (247.9148ms)

                                                
                                                
** stderr ** 
	W0528 13:52:39.234886    9824 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-200200 config get cpus" to be -"Error: specified key could not be found in config"- but got *"W0528 13:52:39.234886    9824 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube1\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\nError: specified key could not be found in config"*
--- FAIL: TestFunctional/parallel/ConfigCmd (1.32s)
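These assertions compare the command's entire stderr against an expected string, so even though config set/get/unset behave as expected, the stray Docker-context warning prepended to stderr fails every comparison. One way to see that the warning is the only difference, assuming a PowerShell session in the repository root:
	# Re-run one compared invocation and drop the known warning line
	out/minikube-windows-amd64.exe -p functional-200200 config get cpus 2>&1 |
	    Select-String -NotMatch 'Unable to resolve the current Docker CLI context'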

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (15.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-200200 service --namespace=default --https --url hello-node
functional_test.go:1505: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-200200 service --namespace=default --https --url hello-node: exit status 1 (15.0146369s)

                                                
                                                
** stderr ** 
	W0528 13:54:44.981751   10612 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
functional_test.go:1507: failed to get service url. args "out/minikube-windows-amd64.exe -p functional-200200 service --namespace=default --https --url hello-node" : exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (15.01s)
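The service command exits 1 after the 15 s retry window without printing a URL, so the test has nothing to validate. A quick manual cross-check, assuming the hello-node deployment and service created by the earlier subtests are still present:
	# Confirm the service exists and has a NodePort assigned
	kubectl --context functional-200200 get svc hello-node -o wide
	# Ask minikube which URLs it would publish for the profile's services
	out/minikube-windows-amd64.exe -p functional-200200 service list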

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (15.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-200200 service hello-node --url --format={{.IP}}
functional_test.go:1536: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-200200 service hello-node --url --format={{.IP}}: exit status 1 (15.0188567s)

                                                
                                                
** stderr ** 
	W0528 13:55:00.045130    6800 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
functional_test.go:1538: failed to get service url with custom format. args "out/minikube-windows-amd64.exe -p functional-200200 service hello-node --url --format={{.IP}}": exit status 1
functional_test.go:1544: "" is not a valid IP
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (15.02s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (15.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-200200 service hello-node --url
functional_test.go:1555: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-200200 service hello-node --url: exit status 1 (15.0329983s)

                                                
                                                
** stderr ** 
	W0528 13:55:15.008364    7696 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
functional_test.go:1557: failed to get service url. args: "out/minikube-windows-amd64.exe -p functional-200200 service hello-node --url": exit status 1
functional_test.go:1561: found endpoint for hello-node: 
functional_test.go:1569: expected scheme to be -"http"- got scheme: *""*
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (15.03s)
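The Format and URL subtests fail the same way as the HTTPS one above: the underlying service lookup exits 1 before printing anything, so the test sees an empty string instead of an IP or an http URL. The pieces the URL would normally be assembled from can be fetched directly; the jsonpath below assumes the standard single-port NodePort service created by the test:
	# Node IP plus the service's NodePort is what the printed URL should contain
	out/minikube-windows-amd64.exe -p functional-200200 ip
	kubectl --context functional-200200 get svc hello-node -o jsonpath='{.spec.ports[0].nodePort}'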

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (69.65s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-941700 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-941700 -- exec busybox-fc5497c4f-qnh86 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-941700 -- exec busybox-fc5497c4f-qnh86 -- sh -c "ping -c 1 172.18.208.1"
E0528 14:13:09.641314    1780 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-854100\client.crt: The system cannot find the path specified.
ha_test.go:218: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-941700 -- exec busybox-fc5497c4f-qnh86 -- sh -c "ping -c 1 172.18.208.1": exit status 1 (10.4561068s)

                                                
                                                
-- stdout --
	PING 172.18.208.1 (172.18.208.1): 56 data bytes
	
	--- 172.18.208.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

                                                
                                                
-- /stdout --
** stderr ** 
	W0528 14:13:08.161010   11356 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

                                                
                                                
** /stderr **
ha_test.go:219: Failed to ping host (172.18.208.1) from pod (busybox-fc5497c4f-qnh86): exit status 1
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-941700 -- exec busybox-fc5497c4f-s2cwz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-941700 -- exec busybox-fc5497c4f-s2cwz -- sh -c "ping -c 1 172.18.208.1"
ha_test.go:218: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-941700 -- exec busybox-fc5497c4f-s2cwz -- sh -c "ping -c 1 172.18.208.1": exit status 1 (10.4501127s)

                                                
                                                
-- stdout --
	PING 172.18.208.1 (172.18.208.1): 56 data bytes
	
	--- 172.18.208.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

                                                
                                                
-- /stdout --
** stderr ** 
	W0528 14:13:19.112493   14876 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

                                                
                                                
** /stderr **
ha_test.go:219: Failed to ping host (172.18.208.1) from pod (busybox-fc5497c4f-s2cwz): exit status 1
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-941700 -- exec busybox-fc5497c4f-xbkdf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
E0528 14:13:29.660853    1780 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-200200\client.crt: The system cannot find the path specified.
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-941700 -- exec busybox-fc5497c4f-xbkdf -- sh -c "ping -c 1 172.18.208.1"
ha_test.go:218: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-941700 -- exec busybox-fc5497c4f-xbkdf -- sh -c "ping -c 1 172.18.208.1": exit status 1 (10.4615864s)

                                                
                                                
-- stdout --
	PING 172.18.208.1 (172.18.208.1): 56 data bytes
	
	--- 172.18.208.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

                                                
                                                
-- /stdout --
** stderr ** 
	W0528 14:13:30.022048    5300 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

                                                
                                                
** /stderr **
ha_test.go:219: Failed to ping host (172.18.208.1) from pod (busybox-fc5497c4f-xbkdf): exit status 1
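All three busybox pods resolve host.minikube.internal but lose 100% of their ICMP echoes to the Hyper-V host at 172.18.208.1, which points at the host side rather than the cluster; on the hyperv driver this is commonly the Windows firewall dropping inbound ICMPv4 on the Default Switch adapter. A sketch of that check from an elevated PowerShell prompt on the host, assuming the kubeconfig context matches the profile name (the firewall rule name is arbitrary):
	# Allow inbound ICMPv4 echo requests so pods can ping the host gateway
	New-NetFirewallRule -DisplayName 'Allow ICMPv4-In (minikube)' -Direction Inbound -Protocol ICMPv4 -IcmpType 8 -Action Allow
	# Re-run the failing probe from one of the pods
	kubectl --context ha-941700 exec busybox-fc5497c4f-qnh86 -- ping -c 1 172.18.208.1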
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-941700 -n ha-941700
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-941700 -n ha-941700: (12.7797611s)
helpers_test.go:244: <<< TestMultiControlPlane/serial/PingHostFromPods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/PingHostFromPods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-941700 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p ha-941700 logs -n 25: (9.1364689s)
helpers_test.go:252: TestMultiControlPlane/serial/PingHostFromPods logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| Command |                 Args                 |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| image   | functional-200200                    | functional-200200 | minikube1\jenkins | v1.33.1 | 28 May 24 13:56 PDT | 28 May 24 13:56 PDT |
	|         | image ls --format table              |                   |                   |         |                     |                     |
	|         | --alsologtostderr                    |                   |                   |         |                     |                     |
	| image   | functional-200200 image build -t     | functional-200200 | minikube1\jenkins | v1.33.1 | 28 May 24 13:56 PDT | 28 May 24 13:56 PDT |
	|         | localhost/my-image:functional-200200 |                   |                   |         |                     |                     |
	|         | testdata\build --alsologtostderr     |                   |                   |         |                     |                     |
	| image   | functional-200200 image ls           | functional-200200 | minikube1\jenkins | v1.33.1 | 28 May 24 13:56 PDT | 28 May 24 13:56 PDT |
	| delete  | -p functional-200200                 | functional-200200 | minikube1\jenkins | v1.33.1 | 28 May 24 13:59 PDT | 28 May 24 14:00 PDT |
	| start   | -p ha-941700 --wait=true             | ha-941700         | minikube1\jenkins | v1.33.1 | 28 May 24 14:00 PDT | 28 May 24 14:12 PDT |
	|         | --memory=2200 --ha                   |                   |                   |         |                     |                     |
	|         | -v=7 --alsologtostderr               |                   |                   |         |                     |                     |
	|         | --driver=hyperv                      |                   |                   |         |                     |                     |
	| kubectl | -p ha-941700 -- apply -f             | ha-941700         | minikube1\jenkins | v1.33.1 | 28 May 24 14:12 PDT | 28 May 24 14:12 PDT |
	|         | ./testdata/ha/ha-pod-dns-test.yaml   |                   |                   |         |                     |                     |
	| kubectl | -p ha-941700 -- rollout status       | ha-941700         | minikube1\jenkins | v1.33.1 | 28 May 24 14:12 PDT | 28 May 24 14:13 PDT |
	|         | deployment/busybox                   |                   |                   |         |                     |                     |
	| kubectl | -p ha-941700 -- get pods -o          | ha-941700         | minikube1\jenkins | v1.33.1 | 28 May 24 14:13 PDT | 28 May 24 14:13 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |                   |         |                     |                     |
	| kubectl | -p ha-941700 -- get pods -o          | ha-941700         | minikube1\jenkins | v1.33.1 | 28 May 24 14:13 PDT | 28 May 24 14:13 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |                   |                   |         |                     |                     |
	| kubectl | -p ha-941700 -- exec                 | ha-941700         | minikube1\jenkins | v1.33.1 | 28 May 24 14:13 PDT | 28 May 24 14:13 PDT |
	|         | busybox-fc5497c4f-qnh86 --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |                   |         |                     |                     |
	| kubectl | -p ha-941700 -- exec                 | ha-941700         | minikube1\jenkins | v1.33.1 | 28 May 24 14:13 PDT | 28 May 24 14:13 PDT |
	|         | busybox-fc5497c4f-s2cwz --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |                   |         |                     |                     |
	| kubectl | -p ha-941700 -- exec                 | ha-941700         | minikube1\jenkins | v1.33.1 | 28 May 24 14:13 PDT | 28 May 24 14:13 PDT |
	|         | busybox-fc5497c4f-xbkdf --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |                   |         |                     |                     |
	| kubectl | -p ha-941700 -- exec                 | ha-941700         | minikube1\jenkins | v1.33.1 | 28 May 24 14:13 PDT | 28 May 24 14:13 PDT |
	|         | busybox-fc5497c4f-qnh86 --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |                   |         |                     |                     |
	| kubectl | -p ha-941700 -- exec                 | ha-941700         | minikube1\jenkins | v1.33.1 | 28 May 24 14:13 PDT | 28 May 24 14:13 PDT |
	|         | busybox-fc5497c4f-s2cwz --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |                   |         |                     |                     |
	| kubectl | -p ha-941700 -- exec                 | ha-941700         | minikube1\jenkins | v1.33.1 | 28 May 24 14:13 PDT | 28 May 24 14:13 PDT |
	|         | busybox-fc5497c4f-xbkdf --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |                   |         |                     |                     |
	| kubectl | -p ha-941700 -- exec                 | ha-941700         | minikube1\jenkins | v1.33.1 | 28 May 24 14:13 PDT | 28 May 24 14:13 PDT |
	|         | busybox-fc5497c4f-qnh86 -- nslookup  |                   |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |                   |         |                     |                     |
	| kubectl | -p ha-941700 -- exec                 | ha-941700         | minikube1\jenkins | v1.33.1 | 28 May 24 14:13 PDT | 28 May 24 14:13 PDT |
	|         | busybox-fc5497c4f-s2cwz -- nslookup  |                   |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |                   |         |                     |                     |
	| kubectl | -p ha-941700 -- exec                 | ha-941700         | minikube1\jenkins | v1.33.1 | 28 May 24 14:13 PDT | 28 May 24 14:13 PDT |
	|         | busybox-fc5497c4f-xbkdf -- nslookup  |                   |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |                   |         |                     |                     |
	| kubectl | -p ha-941700 -- get pods -o          | ha-941700         | minikube1\jenkins | v1.33.1 | 28 May 24 14:13 PDT | 28 May 24 14:13 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |                   |                   |         |                     |                     |
	| kubectl | -p ha-941700 -- exec                 | ha-941700         | minikube1\jenkins | v1.33.1 | 28 May 24 14:13 PDT | 28 May 24 14:13 PDT |
	|         | busybox-fc5497c4f-qnh86              |                   |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |                   |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |                   |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |                   |                   |         |                     |                     |
	| kubectl | -p ha-941700 -- exec                 | ha-941700         | minikube1\jenkins | v1.33.1 | 28 May 24 14:13 PDT |                     |
	|         | busybox-fc5497c4f-qnh86 -- sh        |                   |                   |         |                     |                     |
	|         | -c ping -c 1 172.18.208.1            |                   |                   |         |                     |                     |
	| kubectl | -p ha-941700 -- exec                 | ha-941700         | minikube1\jenkins | v1.33.1 | 28 May 24 14:13 PDT | 28 May 24 14:13 PDT |
	|         | busybox-fc5497c4f-s2cwz              |                   |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |                   |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |                   |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |                   |                   |         |                     |                     |
	| kubectl | -p ha-941700 -- exec                 | ha-941700         | minikube1\jenkins | v1.33.1 | 28 May 24 14:13 PDT |                     |
	|         | busybox-fc5497c4f-s2cwz -- sh        |                   |                   |         |                     |                     |
	|         | -c ping -c 1 172.18.208.1            |                   |                   |         |                     |                     |
	| kubectl | -p ha-941700 -- exec                 | ha-941700         | minikube1\jenkins | v1.33.1 | 28 May 24 14:13 PDT | 28 May 24 14:13 PDT |
	|         | busybox-fc5497c4f-xbkdf              |                   |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |                   |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |                   |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |                   |                   |         |                     |                     |
	| kubectl | -p ha-941700 -- exec                 | ha-941700         | minikube1\jenkins | v1.33.1 | 28 May 24 14:13 PDT |                     |
	|         | busybox-fc5497c4f-xbkdf -- sh        |                   |                   |         |                     |                     |
	|         | -c ping -c 1 172.18.208.1            |                   |                   |         |                     |                     |
	|---------|--------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/28 14:00:41
	Running on machine: minikube1
	Binary: Built with gc go1.22.3 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0528 14:00:41.215665   14040 out.go:291] Setting OutFile to fd 1000 ...
	I0528 14:00:41.216243   14040 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 14:00:41.216243   14040 out.go:304] Setting ErrFile to fd 980...
	I0528 14:00:41.216243   14040 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 14:00:41.240976   14040 out.go:298] Setting JSON to false
	I0528 14:00:41.243302   14040 start.go:129] hostinfo: {"hostname":"minikube1","uptime":2806,"bootTime":1716927235,"procs":200,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4412 Build 19045.4412","kernelVersion":"10.0.19045.4412 Build 19045.4412","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0528 14:00:41.243302   14040 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0528 14:00:41.248563   14040 out.go:177] * [ha-941700] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4412 Build 19045.4412
	I0528 14:00:41.252361   14040 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0528 14:00:41.252361   14040 notify.go:220] Checking for updates...
	I0528 14:00:41.255220   14040 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0528 14:00:41.258074   14040 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0528 14:00:41.260263   14040 out.go:177]   - MINIKUBE_LOCATION=18966
	I0528 14:00:41.262257   14040 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0528 14:00:41.264970   14040 driver.go:392] Setting default libvirt URI to qemu:///system
	I0528 14:00:46.671154   14040 out.go:177] * Using the hyperv driver based on user configuration
	I0528 14:00:46.675942   14040 start.go:297] selected driver: hyperv
	I0528 14:00:46.675942   14040 start.go:901] validating driver "hyperv" against <nil>
	I0528 14:00:46.675942   14040 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0528 14:00:46.724606   14040 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0528 14:00:46.725359   14040 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0528 14:00:46.725359   14040 cni.go:84] Creating CNI manager for ""
	I0528 14:00:46.725359   14040 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0528 14:00:46.725359   14040 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0528 14:00:46.725950   14040 start.go:340] cluster config:
	{Name:ha-941700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-941700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthS
ock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0528 14:00:46.726145   14040 iso.go:125] acquiring lock: {Name:mk309f19768252c5551da1c999633aa2617813e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0528 14:00:46.730432   14040 out.go:177] * Starting "ha-941700" primary control-plane node in "ha-941700" cluster
	I0528 14:00:46.733274   14040 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0528 14:00:46.733462   14040 preload.go:147] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
	I0528 14:00:46.733544   14040 cache.go:56] Caching tarball of preloaded images
	I0528 14:00:46.733860   14040 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0528 14:00:46.733972   14040 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0528 14:00:46.734367   14040 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\config.json ...
	I0528 14:00:46.734367   14040 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\config.json: {Name:mkf97a72ae148d91fb70226ee26dc84dfdbe9def Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 14:00:46.735707   14040 start.go:360] acquireMachinesLock for ha-941700: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0528 14:00:46.735707   14040 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-941700"
	I0528 14:00:46.736275   14040 start.go:93] Provisioning new machine with config: &{Name:ha-941700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuberne
tesVersion:v1.30.1 ClusterName:ha-941700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mou
ntPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0528 14:00:46.736275   14040 start.go:125] createHost starting for "" (driver="hyperv")
	I0528 14:00:46.739364   14040 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0528 14:00:46.739364   14040 start.go:159] libmachine.API.Create for "ha-941700" (driver="hyperv")
	I0528 14:00:46.739364   14040 client.go:168] LocalClient.Create starting
	I0528 14:00:46.740329   14040 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0528 14:00:46.740329   14040 main.go:141] libmachine: Decoding PEM data...
	I0528 14:00:46.740329   14040 main.go:141] libmachine: Parsing certificate...
	I0528 14:00:46.740329   14040 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0528 14:00:46.740329   14040 main.go:141] libmachine: Decoding PEM data...
	I0528 14:00:46.740329   14040 main.go:141] libmachine: Parsing certificate...
	I0528 14:00:46.741378   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0528 14:00:48.823961   14040 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0528 14:00:48.824775   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:00:48.824880   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0528 14:00:50.614565   14040 main.go:141] libmachine: [stdout =====>] : False
	
	I0528 14:00:50.614565   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:00:50.614565   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0528 14:00:52.101420   14040 main.go:141] libmachine: [stdout =====>] : True
	
	I0528 14:00:52.101420   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:00:52.101420   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0528 14:00:55.737389   14040 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0528 14:00:55.738247   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:00:55.740868   14040 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1716398070-18934-amd64.iso...
	I0528 14:00:56.237651   14040 main.go:141] libmachine: Creating SSH key...
	I0528 14:00:56.521678   14040 main.go:141] libmachine: Creating VM...
	I0528 14:00:56.521678   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0528 14:00:59.389144   14040 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0528 14:00:59.389436   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:00:59.389436   14040 main.go:141] libmachine: Using switch "Default Switch"
	I0528 14:00:59.389436   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0528 14:01:01.132003   14040 main.go:141] libmachine: [stdout =====>] : True
	
	I0528 14:01:01.132542   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:01:01.132599   14040 main.go:141] libmachine: Creating VHD
	I0528 14:01:01.132599   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-941700\fixed.vhd' -SizeBytes 10MB -Fixed
	I0528 14:01:04.955374   14040 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-941700\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : D3D2ACC5-F92D-4A9E-AF79-B83C6781150C
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0528 14:01:04.955374   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:01:04.955478   14040 main.go:141] libmachine: Writing magic tar header
	I0528 14:01:04.955478   14040 main.go:141] libmachine: Writing SSH key tar header
	I0528 14:01:04.966129   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-941700\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-941700\disk.vhd' -VHDType Dynamic -DeleteSource
	I0528 14:01:08.176007   14040 main.go:141] libmachine: [stdout =====>] : 
	I0528 14:01:08.176007   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:01:08.176007   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-941700\disk.vhd' -SizeBytes 20000MB
	I0528 14:01:10.786151   14040 main.go:141] libmachine: [stdout =====>] : 
	I0528 14:01:10.786151   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:01:10.786939   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-941700 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-941700' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0528 14:01:14.509428   14040 main.go:141] libmachine: [stdout =====>] : 
	Name      State CPUUsage(%!)(MISSING) MemoryAssigned(M) Uptime   Status             Version
	----      ----- ----------- ----------------- ------   ------             -------
	ha-941700 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0528 14:01:14.509428   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:01:14.509428   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-941700 -DynamicMemoryEnabled $false
	I0528 14:01:16.806352   14040 main.go:141] libmachine: [stdout =====>] : 
	I0528 14:01:16.807198   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:01:16.807350   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-941700 -Count 2
	I0528 14:01:19.010745   14040 main.go:141] libmachine: [stdout =====>] : 
	I0528 14:01:19.010745   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:01:19.010745   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-941700 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-941700\boot2docker.iso'
	I0528 14:01:21.669389   14040 main.go:141] libmachine: [stdout =====>] : 
	I0528 14:01:21.669700   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:01:21.669888   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-941700 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-941700\disk.vhd'
	I0528 14:01:24.448965   14040 main.go:141] libmachine: [stdout =====>] : 
	I0528 14:01:24.449204   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:01:24.449204   14040 main.go:141] libmachine: Starting VM...
	I0528 14:01:24.449334   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-941700
	I0528 14:01:27.527216   14040 main.go:141] libmachine: [stdout =====>] : 
	I0528 14:01:27.527216   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:01:27.527216   14040 main.go:141] libmachine: Waiting for host to start...
	I0528 14:01:27.528214   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700 ).state
	I0528 14:01:29.872069   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:01:29.872731   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:01:29.872803   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700 ).networkadapters[0]).ipaddresses[0]
	I0528 14:01:32.453011   14040 main.go:141] libmachine: [stdout =====>] : 
	I0528 14:01:32.453011   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:01:33.464790   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700 ).state
	I0528 14:01:35.716038   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:01:35.716181   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:01:35.716295   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700 ).networkadapters[0]).ipaddresses[0]
	I0528 14:01:38.249807   14040 main.go:141] libmachine: [stdout =====>] : 
	I0528 14:01:38.249807   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:01:39.255088   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700 ).state
	I0528 14:01:41.465340   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:01:41.465340   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:01:41.466359   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700 ).networkadapters[0]).ipaddresses[0]
	I0528 14:01:43.977879   14040 main.go:141] libmachine: [stdout =====>] : 
	I0528 14:01:43.977879   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:01:44.985783   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700 ).state
	I0528 14:01:47.225376   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:01:47.225601   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:01:47.225701   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700 ).networkadapters[0]).ipaddresses[0]
	I0528 14:01:49.731096   14040 main.go:141] libmachine: [stdout =====>] : 
	I0528 14:01:49.731096   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:01:50.744523   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700 ).state
	I0528 14:01:52.994040   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:01:52.994040   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:01:52.994739   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700 ).networkadapters[0]).ipaddresses[0]
	I0528 14:01:55.544241   14040 main.go:141] libmachine: [stdout =====>] : 172.18.216.124
	
	I0528 14:01:55.545271   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:01:55.545391   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700 ).state
	I0528 14:01:57.733778   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:01:57.733778   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:01:57.734164   14040 machine.go:94] provisionDockerMachine start ...
	I0528 14:01:57.734280   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700 ).state
	I0528 14:01:59.851438   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:01:59.851438   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:01:59.851562   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700 ).networkadapters[0]).ipaddresses[0]
	I0528 14:02:02.382291   14040 main.go:141] libmachine: [stdout =====>] : 172.18.216.124
	
	I0528 14:02:02.382291   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:02:02.388693   14040 main.go:141] libmachine: Using SSH client type: native
	I0528 14:02:02.400838   14040 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5ca4a0] 0x5cd080 <nil>  [] 0s} 172.18.216.124 22 <nil> <nil>}
	I0528 14:02:02.400838   14040 main.go:141] libmachine: About to run SSH command:
	hostname
	I0528 14:02:02.523539   14040 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0528 14:02:02.523655   14040 buildroot.go:166] provisioning hostname "ha-941700"
	I0528 14:02:02.523721   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700 ).state
	I0528 14:02:04.688791   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:02:04.689422   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:02:04.689478   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700 ).networkadapters[0]).ipaddresses[0]
	I0528 14:02:07.241043   14040 main.go:141] libmachine: [stdout =====>] : 172.18.216.124
	
	I0528 14:02:07.241043   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:02:07.248562   14040 main.go:141] libmachine: Using SSH client type: native
	I0528 14:02:07.249146   14040 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5ca4a0] 0x5cd080 <nil>  [] 0s} 172.18.216.124 22 <nil> <nil>}
	I0528 14:02:07.249330   14040 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-941700 && echo "ha-941700" | sudo tee /etc/hostname
	I0528 14:02:07.391960   14040 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-941700
	
	I0528 14:02:07.392109   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700 ).state
	I0528 14:02:09.505992   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:02:09.506266   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:02:09.506394   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700 ).networkadapters[0]).ipaddresses[0]
	I0528 14:02:12.060897   14040 main.go:141] libmachine: [stdout =====>] : 172.18.216.124
	
	I0528 14:02:12.060897   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:02:12.066129   14040 main.go:141] libmachine: Using SSH client type: native
	I0528 14:02:12.066963   14040 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5ca4a0] 0x5cd080 <nil>  [] 0s} 172.18.216.124 22 <nil> <nil>}
	I0528 14:02:12.066963   14040 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-941700' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-941700/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-941700' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0528 14:02:12.205377   14040 main.go:141] libmachine: SSH cmd err, output: <nil>: 
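
	The shell snippet above guarantees that /etc/hosts maps 127.0.1.1 to the machine name: it leaves the file alone if a line already ends with the name, rewrites an existing 127.0.1.1 entry if there is one, and appends a new entry otherwise. A rough Go equivalent of that logic, operating on a local copy of the file, is sketched below; it is purely illustrative (minikube runs the shell version over SSH), and the hard-coded hostname is an assumption taken from the log.

	// hostsguard.go: sketch of the /etc/hosts edit performed by the shell
	// snippet above, done natively in Go for illustration.
	package main

	import (
		"fmt"
		"os"
		"regexp"
		"strings"
	)

	func ensureHostname(hosts, name string) string {
		// A line already ends with the hostname? Leave the file alone.
		if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(name) + `$`).MatchString(hosts) {
			return hosts
		}
		loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
		if loopback.MatchString(hosts) {
			// Rewrite an existing 127.0.1.1 entry in place.
			return loopback.ReplaceAllString(hosts, "127.0.1.1 "+name)
		}
		// Otherwise append a new entry.
		return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + name + "\n"
	}

	func main() {
		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			fmt.Println("read error:", err)
			return
		}
		fmt.Print(ensureHostname(string(data), "ha-941700"))
	}
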
	I0528 14:02:12.205377   14040 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0528 14:02:12.205377   14040 buildroot.go:174] setting up certificates
	I0528 14:02:12.205377   14040 provision.go:84] configureAuth start
	I0528 14:02:12.205377   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700 ).state
	I0528 14:02:14.365189   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:02:14.366135   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:02:14.366248   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700 ).networkadapters[0]).ipaddresses[0]
	I0528 14:02:16.947810   14040 main.go:141] libmachine: [stdout =====>] : 172.18.216.124
	
	I0528 14:02:16.948390   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:02:16.948624   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700 ).state
	I0528 14:02:19.100669   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:02:19.100669   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:02:19.100906   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700 ).networkadapters[0]).ipaddresses[0]
	I0528 14:02:21.666381   14040 main.go:141] libmachine: [stdout =====>] : 172.18.216.124
	
	I0528 14:02:21.666381   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:02:21.666381   14040 provision.go:143] copyHostCerts
	I0528 14:02:21.667126   14040 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0528 14:02:21.667570   14040 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0528 14:02:21.667644   14040 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0528 14:02:21.668099   14040 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1675 bytes)
	I0528 14:02:21.669247   14040 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0528 14:02:21.669579   14040 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0528 14:02:21.669579   14040 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0528 14:02:21.669579   14040 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0528 14:02:21.671451   14040 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0528 14:02:21.671742   14040 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0528 14:02:21.671742   14040 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0528 14:02:21.672129   14040 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0528 14:02:21.673259   14040 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-941700 san=[127.0.0.1 172.18.216.124 ha-941700 localhost minikube]
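
	The server certificate generated here carries the SANs listed in the log entry (127.0.0.1, 172.18.216.124, ha-941700, localhost, minikube), so the Docker daemon's TLS endpoint is valid for both loopback and the VM's external address. The sketch below shows how such a SAN set maps onto Go's crypto/x509 certificate template; it is self-signed for brevity, whereas minikube signs with ca.pem/ca-key.pem, and the key size and validity period are assumptions.

	// servercert.go: minimal sketch of a server certificate whose SANs match
	// the list logged above. Illustrative only.
	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-941700"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(24 * time.Hour),
			KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs from the log: IPs go in IPAddresses, hostnames in DNSNames.
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.18.216.124")},
			DNSNames:    []string{"ha-941700", "localhost", "minikube"},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
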
	I0528 14:02:21.943414   14040 provision.go:177] copyRemoteCerts
	I0528 14:02:21.956408   14040 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0528 14:02:21.956408   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700 ).state
	I0528 14:02:24.066021   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:02:24.067070   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:02:24.067070   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700 ).networkadapters[0]).ipaddresses[0]
	I0528 14:02:26.630955   14040 main.go:141] libmachine: [stdout =====>] : 172.18.216.124
	
	I0528 14:02:26.630955   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:02:26.630955   14040 sshutil.go:53] new ssh client: &{IP:172.18.216.124 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-941700\id_rsa Username:docker}
	I0528 14:02:26.743322   14040 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7868073s)
	I0528 14:02:26.743322   14040 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0528 14:02:26.743322   14040 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1200 bytes)
	I0528 14:02:26.792526   14040 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0528 14:02:26.792526   14040 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0528 14:02:26.837930   14040 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0528 14:02:26.838124   14040 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0528 14:02:26.883404   14040 provision.go:87] duration metric: took 14.6779977s to configureAuth
	I0528 14:02:26.883404   14040 buildroot.go:189] setting minikube options for container-runtime
	I0528 14:02:26.884401   14040 config.go:182] Loaded profile config "ha-941700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0528 14:02:26.884584   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700 ).state
	I0528 14:02:29.035532   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:02:29.035532   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:02:29.036410   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700 ).networkadapters[0]).ipaddresses[0]
	I0528 14:02:31.565311   14040 main.go:141] libmachine: [stdout =====>] : 172.18.216.124
	
	I0528 14:02:31.565494   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:02:31.571234   14040 main.go:141] libmachine: Using SSH client type: native
	I0528 14:02:31.572038   14040 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5ca4a0] 0x5cd080 <nil>  [] 0s} 172.18.216.124 22 <nil> <nil>}
	I0528 14:02:31.572038   14040 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0528 14:02:31.696668   14040 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0528 14:02:31.696668   14040 buildroot.go:70] root file system type: tmpfs
	I0528 14:02:31.696668   14040 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0528 14:02:31.696668   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700 ).state
	I0528 14:02:33.835961   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:02:33.837348   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:02:33.837456   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700 ).networkadapters[0]).ipaddresses[0]
	I0528 14:02:36.364746   14040 main.go:141] libmachine: [stdout =====>] : 172.18.216.124
	
	I0528 14:02:36.364746   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:02:36.371283   14040 main.go:141] libmachine: Using SSH client type: native
	I0528 14:02:36.372140   14040 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5ca4a0] 0x5cd080 <nil>  [] 0s} 172.18.216.124 22 <nil> <nil>}
	I0528 14:02:36.372140   14040 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0528 14:02:36.533079   14040 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0528 14:02:36.533079   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700 ).state
	I0528 14:02:38.596976   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:02:38.596976   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:02:38.597182   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700 ).networkadapters[0]).ipaddresses[0]
	I0528 14:02:41.182753   14040 main.go:141] libmachine: [stdout =====>] : 172.18.216.124
	
	I0528 14:02:41.182753   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:02:41.188718   14040 main.go:141] libmachine: Using SSH client type: native
	I0528 14:02:41.189361   14040 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5ca4a0] 0x5cd080 <nil>  [] 0s} 172.18.216.124 22 <nil> <nil>}
	I0528 14:02:41.189361   14040 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0528 14:02:43.297391   14040 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0528 14:02:43.297391   14040 machine.go:97] duration metric: took 45.5631359s to provisionDockerMachine
	I0528 14:02:43.297391   14040 client.go:171] duration metric: took 1m56.5577998s to LocalClient.Create
	I0528 14:02:43.297391   14040 start.go:167] duration metric: took 1m56.5577998s to libmachine.API.Create "ha-941700"
	I0528 14:02:43.297391   14040 start.go:293] postStartSetup for "ha-941700" (driver="hyperv")
	I0528 14:02:43.297391   14040 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0528 14:02:43.311613   14040 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0528 14:02:43.311613   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700 ).state
	I0528 14:02:45.469126   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:02:45.469126   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:02:45.470049   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700 ).networkadapters[0]).ipaddresses[0]
	I0528 14:02:48.019098   14040 main.go:141] libmachine: [stdout =====>] : 172.18.216.124
	
	I0528 14:02:48.019098   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:02:48.019576   14040 sshutil.go:53] new ssh client: &{IP:172.18.216.124 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-941700\id_rsa Username:docker}
	I0528 14:02:48.126537   14040 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8149147s)
	I0528 14:02:48.139311   14040 ssh_runner.go:195] Run: cat /etc/os-release
	I0528 14:02:48.146353   14040 info.go:137] Remote host: Buildroot 2023.02.9
	I0528 14:02:48.146353   14040 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0528 14:02:48.146671   14040 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0528 14:02:48.147513   14040 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\17802.pem -> 17802.pem in /etc/ssl/certs
	I0528 14:02:48.147513   14040 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\17802.pem -> /etc/ssl/certs/17802.pem
	I0528 14:02:48.159490   14040 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0528 14:02:48.178118   14040 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\17802.pem --> /etc/ssl/certs/17802.pem (1708 bytes)
	I0528 14:02:48.230993   14040 start.go:296] duration metric: took 4.9335916s for postStartSetup
	I0528 14:02:48.235922   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700 ).state
	I0528 14:02:50.376755   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:02:50.376755   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:02:50.377480   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700 ).networkadapters[0]).ipaddresses[0]
	I0528 14:02:52.895880   14040 main.go:141] libmachine: [stdout =====>] : 172.18.216.124
	
	I0528 14:02:52.895880   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:02:52.895880   14040 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\config.json ...
	I0528 14:02:52.898951   14040 start.go:128] duration metric: took 2m6.1624287s to createHost
	I0528 14:02:52.898951   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700 ).state
	I0528 14:02:55.001527   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:02:55.001527   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:02:55.001741   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700 ).networkadapters[0]).ipaddresses[0]
	I0528 14:02:57.479140   14040 main.go:141] libmachine: [stdout =====>] : 172.18.216.124
	
	I0528 14:02:57.479549   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:02:57.484541   14040 main.go:141] libmachine: Using SSH client type: native
	I0528 14:02:57.484981   14040 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5ca4a0] 0x5cd080 <nil>  [] 0s} 172.18.216.124 22 <nil> <nil>}
	I0528 14:02:57.485046   14040 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0528 14:02:57.611158   14040 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716930177.606966171
	
	I0528 14:02:57.611158   14040 fix.go:216] guest clock: 1716930177.606966171
	I0528 14:02:57.611158   14040 fix.go:229] Guest: 2024-05-28 14:02:57.606966171 -0700 PDT Remote: 2024-05-28 14:02:52.8989511 -0700 PDT m=+131.776057001 (delta=4.708015071s)
	I0528 14:02:57.611158   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700 ).state
	I0528 14:02:59.693923   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:02:59.694054   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:02:59.694054   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700 ).networkadapters[0]).ipaddresses[0]
	I0528 14:03:02.296993   14040 main.go:141] libmachine: [stdout =====>] : 172.18.216.124
	
	I0528 14:03:02.296993   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:03:02.305067   14040 main.go:141] libmachine: Using SSH client type: native
	I0528 14:03:02.305973   14040 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5ca4a0] 0x5cd080 <nil>  [] 0s} 172.18.216.124 22 <nil> <nil>}
	I0528 14:03:02.305973   14040 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1716930177
	I0528 14:03:02.444170   14040 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue May 28 21:02:57 UTC 2024
	
	I0528 14:03:02.444170   14040 fix.go:236] clock set: Tue May 28 21:02:57 UTC 2024
	 (err=<nil>)
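
	The clock fix above works by reading the guest's epoch time over SSH, comparing it with the host-side timestamp recorded when createHost returned, and re-setting the guest clock when the drift is significant; here the guest was about 4.7s ahead. A small Go sketch reproducing the delta computation with the exact values from the log is below (the 2-second threshold is an assumption for illustration).

	// clockfix.go: sketch of the clock-skew check logged above, using the
	// guest and host timestamps recorded in the log.
	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		guest := time.Unix(1716930177, 606966171) // from `date +%s.%N` on the VM
		remote := time.Date(2024, 5, 28, 14, 2, 52, 898951100, time.FixedZone("PDT", -7*3600))
		delta := guest.Sub(remote) // ~4.708s, matching the logged delta
		fmt.Printf("delta=%v\n", delta)
		if delta > 2*time.Second || delta < -2*time.Second {
			// The command minikube ran over SSH to reset the guest clock.
			fmt.Printf("sudo date -s @%d\n", guest.Unix())
		}
	}
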
	I0528 14:03:02.444170   14040 start.go:83] releasing machines lock for "ha-941700", held for 2m15.7081975s
	I0528 14:03:02.444518   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700 ).state
	I0528 14:03:04.612023   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:03:04.612417   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:03:04.612417   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700 ).networkadapters[0]).ipaddresses[0]
	I0528 14:03:07.182187   14040 main.go:141] libmachine: [stdout =====>] : 172.18.216.124
	
	I0528 14:03:07.182187   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:03:07.187481   14040 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0528 14:03:07.187626   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700 ).state
	I0528 14:03:07.197431   14040 ssh_runner.go:195] Run: cat /version.json
	I0528 14:03:07.197431   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700 ).state
	I0528 14:03:09.462511   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:03:09.462511   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:03:09.462511   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700 ).networkadapters[0]).ipaddresses[0]
	I0528 14:03:09.462511   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:03:09.463173   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:03:09.463173   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700 ).networkadapters[0]).ipaddresses[0]
	I0528 14:03:12.092133   14040 main.go:141] libmachine: [stdout =====>] : 172.18.216.124
	
	I0528 14:03:12.092133   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:03:12.092133   14040 sshutil.go:53] new ssh client: &{IP:172.18.216.124 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-941700\id_rsa Username:docker}
	I0528 14:03:12.137789   14040 main.go:141] libmachine: [stdout =====>] : 172.18.216.124
	
	I0528 14:03:12.137789   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:03:12.137789   14040 sshutil.go:53] new ssh client: &{IP:172.18.216.124 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-941700\id_rsa Username:docker}
	I0528 14:03:12.243414   14040 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.0559227s)
	I0528 14:03:12.243582   14040 ssh_runner.go:235] Completed: cat /version.json: (5.046141s)
	I0528 14:03:12.256208   14040 ssh_runner.go:195] Run: systemctl --version
	I0528 14:03:12.276960   14040 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0528 14:03:12.285768   14040 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0528 14:03:12.298393   14040 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0528 14:03:12.325784   14040 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0528 14:03:12.325784   14040 start.go:494] detecting cgroup driver to use...
	I0528 14:03:12.325784   14040 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0528 14:03:12.372789   14040 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0528 14:03:12.408180   14040 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0528 14:03:12.429851   14040 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0528 14:03:12.442364   14040 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0528 14:03:12.475413   14040 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0528 14:03:12.512039   14040 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0528 14:03:12.548169   14040 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0528 14:03:12.580758   14040 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0528 14:03:12.612164   14040 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0528 14:03:12.643175   14040 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0528 14:03:12.673752   14040 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0528 14:03:12.703758   14040 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0528 14:03:12.732750   14040 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0528 14:03:12.763769   14040 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 14:03:12.971815   14040 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0528 14:03:13.004031   14040 start.go:494] detecting cgroup driver to use...
	I0528 14:03:13.016618   14040 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0528 14:03:13.055523   14040 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0528 14:03:13.094094   14040 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0528 14:03:13.135174   14040 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0528 14:03:13.170627   14040 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0528 14:03:13.207218   14040 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0528 14:03:13.269076   14040 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0528 14:03:13.294336   14040 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0528 14:03:13.341736   14040 ssh_runner.go:195] Run: which cri-dockerd
	I0528 14:03:13.363058   14040 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0528 14:03:13.384564   14040 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0528 14:03:13.429678   14040 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0528 14:03:13.624925   14040 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0528 14:03:13.800851   14040 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0528 14:03:13.801018   14040 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0528 14:03:13.850564   14040 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 14:03:14.047998   14040 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0528 14:03:16.583666   14040 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5356299s)
	I0528 14:03:16.598143   14040 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0528 14:03:16.643710   14040 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0528 14:03:16.685641   14040 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0528 14:03:16.879615   14040 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0528 14:03:17.079329   14040 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 14:03:17.284829   14040 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0528 14:03:17.325787   14040 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0528 14:03:17.367941   14040 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 14:03:17.574418   14040 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0528 14:03:17.686861   14040 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0528 14:03:17.699673   14040 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0528 14:03:17.709690   14040 start.go:562] Will wait 60s for crictl version
	I0528 14:03:17.721680   14040 ssh_runner.go:195] Run: which crictl
	I0528 14:03:17.741181   14040 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0528 14:03:17.799611   14040 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0528 14:03:17.808594   14040 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0528 14:03:17.853206   14040 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0528 14:03:17.890736   14040 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.0.2 ...
	I0528 14:03:17.890736   14040 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0528 14:03:17.895426   14040 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0528 14:03:17.895426   14040 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0528 14:03:17.895426   14040 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0528 14:03:17.895426   14040 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:91:32:cc Flags:up|broadcast|multicast|running}
	I0528 14:03:17.898425   14040 ip.go:210] interface addr: fe80::6a65:d82d:f435:d71f/64
	I0528 14:03:17.898425   14040 ip.go:210] interface addr: 172.18.208.1/20
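
	The ip.go lines above show how the host-side address for host.minikube.internal is picked: minikube walks the host's network interfaces, skips those whose names do not match the "vEthernet (Default Switch)" prefix, and takes the matching adapter's IPv4 address (172.18.208.1 here). The compact Go sketch below performs the same lookup with the standard net package; it is illustrative only, with the prefix taken from the log.

	// hostiface.go: sketch of the interface lookup logged above.
	package main

	import (
		"fmt"
		"net"
		"strings"
	)

	func main() {
		ifaces, err := net.Interfaces()
		if err != nil {
			fmt.Println("error:", err)
			return
		}
		for _, iface := range ifaces {
			if !strings.HasPrefix(iface.Name, "vEthernet (Default Switch)") {
				continue
			}
			addrs, _ := iface.Addrs()
			for _, a := range addrs {
				if ipn, ok := a.(*net.IPNet); ok && ipn.IP.To4() != nil {
					fmt.Println(iface.Name, "->", ipn.IP) // 172.18.208.1 in the log
				}
			}
		}
	}
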
	I0528 14:03:17.910426   14040 ssh_runner.go:195] Run: grep 172.18.208.1	host.minikube.internal$ /etc/hosts
	I0528 14:03:17.917657   14040 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.18.208.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0528 14:03:17.954346   14040 kubeadm.go:877] updating cluster {Name:ha-941700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-941700 Namespace:default APIServerHAVIP:172.18.223.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.18.216.124 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0528 14:03:17.954346   14040 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0528 14:03:17.963004   14040 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0528 14:03:17.994085   14040 docker.go:685] Got preloaded images: 
	I0528 14:03:17.994114   14040 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.1 wasn't preloaded
	I0528 14:03:18.007112   14040 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0528 14:03:18.040821   14040 ssh_runner.go:195] Run: which lz4
	I0528 14:03:18.047012   14040 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0528 14:03:18.060079   14040 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0528 14:03:18.067106   14040 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0528 14:03:18.067295   14040 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359564351 bytes)
	I0528 14:03:20.197599   14040 docker.go:649] duration metric: took 2.1503198s to copy over tarball
	I0528 14:03:20.210728   14040 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0528 14:03:28.720781   14040 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.509954s)
	I0528 14:03:28.720847   14040 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0528 14:03:28.790317   14040 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0528 14:03:28.807803   14040 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0528 14:03:28.853178   14040 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 14:03:29.069485   14040 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0528 14:03:32.094179   14040 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.0246872s)
	I0528 14:03:32.106990   14040 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0528 14:03:32.129066   14040 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0528 14:03:32.129066   14040 cache_images.go:84] Images are preloaded, skipping loading
	I0528 14:03:32.129066   14040 kubeadm.go:928] updating node { 172.18.216.124 8443 v1.30.1 docker true true} ...
	I0528 14:03:32.129066   14040 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-941700 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.18.216.124
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-941700 Namespace:default APIServerHAVIP:172.18.223.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0528 14:03:32.139182   14040 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0528 14:03:32.172613   14040 cni.go:84] Creating CNI manager for ""
	I0528 14:03:32.172691   14040 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0528 14:03:32.172691   14040 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0528 14:03:32.172820   14040 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.18.216.124 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-941700 NodeName:ha-941700 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.18.216.124"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.18.216.124 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0528 14:03:32.172925   14040 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.18.216.124
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-941700"
	  kubeletExtraArgs:
	    node-ip: 172.18.216.124
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.18.216.124"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0528 14:03:32.172925   14040 kube-vip.go:115] generating kube-vip config ...
	I0528 14:03:32.185270   14040 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0528 14:03:32.213670   14040 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0528 14:03:32.214500   14040 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.18.223.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0528 14:03:32.226695   14040 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0528 14:03:32.242223   14040 binaries.go:44] Found k8s binaries, skipping transfer
	I0528 14:03:32.256710   14040 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0528 14:03:32.276179   14040 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (310 bytes)
	I0528 14:03:32.310564   14040 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0528 14:03:32.343610   14040 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0528 14:03:32.377724   14040 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0528 14:03:32.431016   14040 ssh_runner.go:195] Run: grep 172.18.223.254	control-plane.minikube.internal$ /etc/hosts
	I0528 14:03:32.438491   14040 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.18.223.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0528 14:03:32.473447   14040 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 14:03:32.676515   14040 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0528 14:03:32.706000   14040 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700 for IP: 172.18.216.124
	I0528 14:03:32.706000   14040 certs.go:194] generating shared ca certs ...
	I0528 14:03:32.706000   14040 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 14:03:32.706706   14040 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0528 14:03:32.706706   14040 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0528 14:03:32.707437   14040 certs.go:256] generating profile certs ...
	I0528 14:03:32.708184   14040 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\client.key
	I0528 14:03:32.708349   14040 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\client.crt with IP's: []
	I0528 14:03:32.870551   14040 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\client.crt ...
	I0528 14:03:32.870551   14040 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\client.crt: {Name:mkd6bfba6f9e22f0dca5f732668ee3226d4be5b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 14:03:32.872767   14040 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\client.key ...
	I0528 14:03:32.872767   14040 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\client.key: {Name:mkc21a04343ad6467dbed833ebeb4d5c00ed91f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 14:03:32.874160   14040 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\apiserver.key.bc149533
	I0528 14:03:32.874160   14040 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\apiserver.crt.bc149533 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.18.216.124 172.18.223.254]
	I0528 14:03:33.010966   14040 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\apiserver.crt.bc149533 ...
	I0528 14:03:33.010966   14040 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\apiserver.crt.bc149533: {Name:mk77fe2a64ed855fe669ad55a966fc5e5a5a715f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 14:03:33.011967   14040 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\apiserver.key.bc149533 ...
	I0528 14:03:33.011967   14040 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\apiserver.key.bc149533: {Name:mk2bd2e3ecda0b12866e7a4e1637306ac0f10a4d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 14:03:33.012758   14040 certs.go:381] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\apiserver.crt.bc149533 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\apiserver.crt
	I0528 14:03:33.024793   14040 certs.go:385] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\apiserver.key.bc149533 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\apiserver.key
	I0528 14:03:33.025665   14040 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\proxy-client.key
	I0528 14:03:33.026738   14040 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\proxy-client.crt with IP's: []
	I0528 14:03:33.222783   14040 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\proxy-client.crt ...
	I0528 14:03:33.222783   14040 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\proxy-client.crt: {Name:mkaec3a95c5cae384c19f826f464eead18aa4ffd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 14:03:33.224742   14040 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\proxy-client.key ...
	I0528 14:03:33.224742   14040 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\proxy-client.key: {Name:mkf7c1f4a975cb393ef78c9197c52017ef37c2bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 14:03:33.225724   14040 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0528 14:03:33.226204   14040 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0528 14:03:33.226456   14040 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0528 14:03:33.226643   14040 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0528 14:03:33.226778   14040 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0528 14:03:33.226929   14040 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0528 14:03:33.227078   14040 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0528 14:03:33.235287   14040 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0528 14:03:33.236287   14040 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\1780.pem (1338 bytes)
	W0528 14:03:33.237016   14040 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\1780_empty.pem, impossibly tiny 0 bytes
	I0528 14:03:33.237103   14040 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0528 14:03:33.237421   14040 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0528 14:03:33.237929   14040 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0528 14:03:33.238146   14040 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0528 14:03:33.238649   14040 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\17802.pem (1708 bytes)
	I0528 14:03:33.239094   14040 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\17802.pem -> /usr/share/ca-certificates/17802.pem
	I0528 14:03:33.239325   14040 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0528 14:03:33.239581   14040 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\1780.pem -> /usr/share/ca-certificates/1780.pem
	I0528 14:03:33.239791   14040 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0528 14:03:33.291249   14040 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0528 14:03:33.335192   14040 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0528 14:03:33.394471   14040 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0528 14:03:33.438098   14040 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0528 14:03:33.481886   14040 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0528 14:03:33.523904   14040 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0528 14:03:33.567035   14040 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0528 14:03:33.600677   14040 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\17802.pem --> /usr/share/ca-certificates/17802.pem (1708 bytes)
	I0528 14:03:33.636018   14040 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0528 14:03:33.679129   14040 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\1780.pem --> /usr/share/ca-certificates/1780.pem (1338 bytes)
	I0528 14:03:33.720581   14040 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0528 14:03:33.763385   14040 ssh_runner.go:195] Run: openssl version
	I0528 14:03:33.785005   14040 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1780.pem && ln -fs /usr/share/ca-certificates/1780.pem /etc/ssl/certs/1780.pem"
	I0528 14:03:33.817774   14040 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1780.pem
	I0528 14:03:33.826748   14040 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 28 20:41 /usr/share/ca-certificates/1780.pem
	I0528 14:03:33.838002   14040 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1780.pem
	I0528 14:03:33.860084   14040 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1780.pem /etc/ssl/certs/51391683.0"
	I0528 14:03:33.892480   14040 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17802.pem && ln -fs /usr/share/ca-certificates/17802.pem /etc/ssl/certs/17802.pem"
	I0528 14:03:33.925706   14040 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17802.pem
	I0528 14:03:33.932708   14040 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 28 20:41 /usr/share/ca-certificates/17802.pem
	I0528 14:03:33.945459   14040 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17802.pem
	I0528 14:03:33.967049   14040 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/17802.pem /etc/ssl/certs/3ec20f2e.0"
	I0528 14:03:33.996070   14040 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0528 14:03:34.026843   14040 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0528 14:03:34.034004   14040 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 28 20:25 /usr/share/ca-certificates/minikubeCA.pem
	I0528 14:03:34.045774   14040 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0528 14:03:34.069462   14040 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
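
The three command groups above follow OpenSSL's hashed-symlink convention for CA directories: each PEM under /usr/share/ca-certificates is checked with `test -s`, hashed with `openssl x509 -hash -noout`, and linked into /etc/ssl/certs as `<hash>.0` (51391683.0, 3ec20f2e.0 and b5213941.0 here) so TLS clients can find it. A minimal sketch of the same pattern, assuming a placeholder file name my-ca.pem:

    # Sketch of the hash-symlink pattern shown above; my-ca.pem is a placeholder, not from this log.
    CERT=/usr/share/ca-certificates/my-ca.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")   # prints the subject-name hash, e.g. b5213941
    # Expose the certificate under /etc/ssl/certs/<hash>.0 unless such a link already exists.
    sudo /bin/bash -c "test -L /etc/ssl/certs/${HASH}.0 || ln -fs $CERT /etc/ssl/certs/${HASH}.0"
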
	I0528 14:03:34.103274   14040 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0528 14:03:34.109206   14040 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0528 14:03:34.109206   14040 kubeadm.go:391] StartCluster: {Name:ha-941700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-941700 Namespace:default APIServerHAVIP:172.18.223.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.18.216.124 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0528 14:03:34.117749   14040 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0528 14:03:34.155043   14040 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0528 14:03:34.195800   14040 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0528 14:03:34.228742   14040 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0528 14:03:34.252068   14040 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0528 14:03:34.252068   14040 kubeadm.go:156] found existing configuration files:
	
	I0528 14:03:34.268296   14040 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0528 14:03:34.291951   14040 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0528 14:03:34.307534   14040 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0528 14:03:34.337065   14040 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0528 14:03:34.357128   14040 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0528 14:03:34.369326   14040 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0528 14:03:34.401032   14040 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0528 14:03:34.418174   14040 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0528 14:03:34.432989   14040 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0528 14:03:34.462724   14040 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0528 14:03:34.481776   14040 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0528 14:03:34.493669   14040 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
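
The block above is minikube's stale-config check before `kubeadm init`: each kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint and removed if it does not reference it (here none of the files exist yet, so every grep exits with status 2 and the `rm -f` calls are no-ops). A sketch of that check-and-remove loop, with the endpoint string taken from the log:

    # Sketch of the stale-kubeconfig cleanup performed above.
    ENDPOINT="https://control-plane.minikube.internal:8443"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
        # If the file is missing or does not reference the expected endpoint, delete it.
        sudo grep -q "$ENDPOINT" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
    done
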
	I0528 14:03:34.512075   14040 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0528 14:03:34.920263   14040 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0528 14:03:49.028042   14040 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0528 14:03:49.028102   14040 kubeadm.go:309] [preflight] Running pre-flight checks
	I0528 14:03:49.028374   14040 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0528 14:03:49.028662   14040 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0528 14:03:49.028833   14040 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0528 14:03:49.029081   14040 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0528 14:03:49.032162   14040 out.go:204]   - Generating certificates and keys ...
	I0528 14:03:49.032530   14040 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0528 14:03:49.032628   14040 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0528 14:03:49.033117   14040 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0528 14:03:49.033271   14040 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0528 14:03:49.033271   14040 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0528 14:03:49.033271   14040 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0528 14:03:49.033271   14040 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0528 14:03:49.033851   14040 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-941700 localhost] and IPs [172.18.216.124 127.0.0.1 ::1]
	I0528 14:03:49.033881   14040 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0528 14:03:49.033881   14040 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-941700 localhost] and IPs [172.18.216.124 127.0.0.1 ::1]
	I0528 14:03:49.033881   14040 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0528 14:03:49.034589   14040 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0528 14:03:49.034745   14040 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0528 14:03:49.034799   14040 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0528 14:03:49.034799   14040 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0528 14:03:49.034799   14040 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0528 14:03:49.034799   14040 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0528 14:03:49.035396   14040 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0528 14:03:49.035613   14040 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0528 14:03:49.035808   14040 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0528 14:03:49.035808   14040 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0528 14:03:49.038795   14040 out.go:204]   - Booting up control plane ...
	I0528 14:03:49.039549   14040 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0528 14:03:49.039549   14040 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0528 14:03:49.039549   14040 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0528 14:03:49.039549   14040 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0528 14:03:49.040351   14040 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0528 14:03:49.040579   14040 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0528 14:03:49.040835   14040 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0528 14:03:49.040869   14040 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0528 14:03:49.041083   14040 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.00150063s
	I0528 14:03:49.041251   14040 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0528 14:03:49.041518   14040 kubeadm.go:309] [api-check] The API server is healthy after 8.003115572s
	I0528 14:03:49.041707   14040 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0528 14:03:49.042061   14040 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0528 14:03:49.042061   14040 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0528 14:03:49.042481   14040 kubeadm.go:309] [mark-control-plane] Marking the node ha-941700 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0528 14:03:49.042481   14040 kubeadm.go:309] [bootstrap-token] Using token: pykq7r.snsu11ybm3wlpgyb
	I0528 14:03:49.046616   14040 out.go:204]   - Configuring RBAC rules ...
	I0528 14:03:49.047523   14040 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0528 14:03:49.047768   14040 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0528 14:03:49.048069   14040 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0528 14:03:49.048380   14040 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0528 14:03:49.048429   14040 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0528 14:03:49.048429   14040 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0528 14:03:49.048429   14040 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0528 14:03:49.048429   14040 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0528 14:03:49.048429   14040 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0528 14:03:49.048429   14040 kubeadm.go:309] 
	I0528 14:03:49.048429   14040 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0528 14:03:49.048429   14040 kubeadm.go:309] 
	I0528 14:03:49.049453   14040 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0528 14:03:49.049453   14040 kubeadm.go:309] 
	I0528 14:03:49.049635   14040 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0528 14:03:49.049811   14040 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0528 14:03:49.049924   14040 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0528 14:03:49.049924   14040 kubeadm.go:309] 
	I0528 14:03:49.050035   14040 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0528 14:03:49.050035   14040 kubeadm.go:309] 
	I0528 14:03:49.050035   14040 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0528 14:03:49.050035   14040 kubeadm.go:309] 
	I0528 14:03:49.050035   14040 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0528 14:03:49.050035   14040 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0528 14:03:49.050822   14040 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0528 14:03:49.050822   14040 kubeadm.go:309] 
	I0528 14:03:49.051090   14040 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0528 14:03:49.051337   14040 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0528 14:03:49.051417   14040 kubeadm.go:309] 
	I0528 14:03:49.051687   14040 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token pykq7r.snsu11ybm3wlpgyb \
	I0528 14:03:49.052034   14040 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:539ff17077d9a18fe6ace348cbfbe22affc6c6390f84ec7c9294c19b6ec125ea \
	I0528 14:03:49.052157   14040 kubeadm.go:309] 	--control-plane 
	I0528 14:03:49.052157   14040 kubeadm.go:309] 
	I0528 14:03:49.052247   14040 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0528 14:03:49.052371   14040 kubeadm.go:309] 
	I0528 14:03:49.052624   14040 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token pykq7r.snsu11ybm3wlpgyb \
	I0528 14:03:49.053055   14040 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:539ff17077d9a18fe6ace348cbfbe22affc6c6390f84ec7c9294c19b6ec125ea 
	I0528 14:03:49.053099   14040 cni.go:84] Creating CNI manager for ""
	I0528 14:03:49.053099   14040 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0528 14:03:49.055210   14040 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0528 14:03:49.072907   14040 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0528 14:03:49.080197   14040 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.1/kubectl ...
	I0528 14:03:49.080197   14040 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0528 14:03:49.127881   14040 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
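
With one node found and more requested, minikube recommends kindnet and applies its manifest with the cluster's own kubectl binary, as shown above. One hedged way to confirm the CNI rollout afterwards (this check is not part of the log; the `app=kindnet` label and kube-system namespace are assumptions based on the stock kindnet DaemonSet):

    # Assumed follow-up check: list the kindnet pods created by the manifest applied above.
    sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
        -n kube-system get pods -l app=kindnet -o wide
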
	I0528 14:03:49.726245   14040 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0528 14:03:49.742480   14040 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 14:03:49.742480   14040 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-941700 minikube.k8s.io/updated_at=2024_05_28T14_03_49_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=c95b4fdda455689199e2a93674568b261e34dc82 minikube.k8s.io/name=ha-941700 minikube.k8s.io/primary=true
	I0528 14:03:49.771127   14040 ops.go:34] apiserver oom_adj: -16
	I0528 14:03:49.980859   14040 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 14:03:50.489017   14040 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 14:03:50.991343   14040 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 14:03:51.479624   14040 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 14:03:51.985188   14040 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 14:03:52.487761   14040 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 14:03:52.988664   14040 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 14:03:53.491610   14040 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 14:03:53.995576   14040 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 14:03:54.481920   14040 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 14:03:54.985228   14040 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 14:03:55.485682   14040 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 14:03:55.991805   14040 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 14:03:56.491742   14040 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 14:03:56.980637   14040 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 14:03:57.485868   14040 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 14:03:57.988139   14040 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 14:03:58.490622   14040 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 14:03:58.993516   14040 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 14:03:59.493373   14040 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 14:03:59.979148   14040 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 14:04:00.483408   14040 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 14:04:00.989055   14040 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 14:04:01.494943   14040 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 14:04:01.984817   14040 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 14:04:02.490629   14040 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 14:04:02.994067   14040 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 14:04:03.147615   14040 kubeadm.go:1107] duration metric: took 13.4213426s to wait for elevateKubeSystemPrivileges
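
The repeated `kubectl get sa default` calls above are minikube waiting for the `default` ServiceAccount to appear in the new cluster before finishing elevateKubeSystemPrivileges; the timestamps show a retry roughly every half second, and the whole wait took about 13.4 s here. A minimal sketch of that wait loop (the 0.5 s interval is inferred from the timestamps, not stated by the tool):

    # Poll until the default ServiceAccount exists in the new cluster.
    KUBECTL=/var/lib/minikube/binaries/v1.30.1/kubectl
    until sudo "$KUBECTL" get sa default --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
        sleep 0.5
    done
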
	W0528 14:04:03.147762   14040 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0528 14:04:03.147762   14040 kubeadm.go:393] duration metric: took 29.038497s to StartCluster
	I0528 14:04:03.147762   14040 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 14:04:03.147762   14040 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0528 14:04:03.149375   14040 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 14:04:03.150660   14040 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0528 14:04:03.150800   14040 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:172.18.216.124 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0528 14:04:03.150800   14040 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0528 14:04:03.150866   14040 start.go:240] waiting for startup goroutines ...
	I0528 14:04:03.150931   14040 addons.go:69] Setting storage-provisioner=true in profile "ha-941700"
	I0528 14:04:03.150931   14040 addons.go:69] Setting default-storageclass=true in profile "ha-941700"
	I0528 14:04:03.150931   14040 addons.go:234] Setting addon storage-provisioner=true in "ha-941700"
	I0528 14:04:03.151087   14040 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-941700"
	I0528 14:04:03.151087   14040 config.go:182] Loaded profile config "ha-941700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0528 14:04:03.151228   14040 host.go:66] Checking if "ha-941700" exists ...
	I0528 14:04:03.151228   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700 ).state
	I0528 14:04:03.152223   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700 ).state
	I0528 14:04:03.340405   14040 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.18.208.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0528 14:04:03.873537   14040 start.go:946] {"host.minikube.internal": 172.18.208.1} host record injected into CoreDNS's ConfigMap
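
The sed pipeline above edits the CoreDNS Corefile in place: it inserts a `hosts` block mapping 172.18.208.1 to host.minikube.internal (with `fallthrough`) ahead of the `forward` plugin and adds a `log` directive before `errors`, then replaces the ConfigMap. A hedged way to confirm the injected record afterwards (not part of this log):

    # Print the patched Corefile and show the injected hosts block.
    sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
        -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}' | grep -A 3 'hosts {'
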
	I0528 14:04:05.444466   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:04:05.444466   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:04:05.445462   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:04:05.445462   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:04:05.448540   14040 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0528 14:04:05.446010   14040 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0528 14:04:05.449448   14040 kapi.go:59] client config for ha-941700: &rest.Config{Host:"https://172.18.223.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-941700\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-941700\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1a6b5a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0528 14:04:05.451081   14040 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0528 14:04:05.451081   14040 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0528 14:04:05.451081   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700 ).state
	I0528 14:04:05.452000   14040 cert_rotation.go:137] Starting client certificate rotation controller
	I0528 14:04:05.452761   14040 addons.go:234] Setting addon default-storageclass=true in "ha-941700"
	I0528 14:04:05.452761   14040 host.go:66] Checking if "ha-941700" exists ...
	I0528 14:04:05.454123   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700 ).state
	I0528 14:04:07.775432   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:04:07.775432   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:04:07.776093   14040 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0528 14:04:07.776093   14040 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0528 14:04:07.776160   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700 ).state
	I0528 14:04:07.847895   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:04:07.847895   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:04:07.848698   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700 ).networkadapters[0]).ipaddresses[0]
	I0528 14:04:10.076562   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:04:10.076817   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:04:10.076881   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700 ).networkadapters[0]).ipaddresses[0]
	I0528 14:04:10.585304   14040 main.go:141] libmachine: [stdout =====>] : 172.18.216.124
	
	I0528 14:04:10.585304   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:04:10.585744   14040 sshutil.go:53] new ssh client: &{IP:172.18.216.124 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-941700\id_rsa Username:docker}
	I0528 14:04:10.722181   14040 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0528 14:04:12.759321   14040 main.go:141] libmachine: [stdout =====>] : 172.18.216.124
	
	I0528 14:04:12.759321   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:04:12.759321   14040 sshutil.go:53] new ssh client: &{IP:172.18.216.124 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-941700\id_rsa Username:docker}
	I0528 14:04:12.908859   14040 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0528 14:04:13.106089   14040 round_trippers.go:463] GET https://172.18.223.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0528 14:04:13.106089   14040 round_trippers.go:469] Request Headers:
	I0528 14:04:13.106089   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:04:13.106089   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:04:13.121218   14040 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0528 14:04:13.124313   14040 round_trippers.go:463] PUT https://172.18.223.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0528 14:04:13.124313   14040 round_trippers.go:469] Request Headers:
	I0528 14:04:13.124313   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:04:13.124313   14040 round_trippers.go:473]     Content-Type: application/json
	I0528 14:04:13.124313   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:04:13.131143   14040 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0528 14:04:13.135479   14040 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0528 14:04:13.139206   14040 addons.go:510] duration metric: took 9.988386s for enable addons: enabled=[storage-provisioner default-storageclass]
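
The GET/PUT pair above is the default-storageclass addon marking the "standard" StorageClass as the cluster default. A rough kubectl equivalent of that update (minikube itself issued a direct API PUT; the patch below is only illustrative and assumes a kubeconfig already pointing at this cluster):

    # Illustrative equivalent: set the default-class annotation on the "standard" StorageClass.
    kubectl patch storageclass standard -p \
        '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
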
	I0528 14:04:13.139206   14040 start.go:245] waiting for cluster config update ...
	I0528 14:04:13.139206   14040 start.go:254] writing updated cluster config ...
	I0528 14:04:13.142574   14040 out.go:177] 
	I0528 14:04:13.154943   14040 config.go:182] Loaded profile config "ha-941700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0528 14:04:13.154943   14040 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\config.json ...
	I0528 14:04:13.198350   14040 out.go:177] * Starting "ha-941700-m02" control-plane node in "ha-941700" cluster
	I0528 14:04:13.244741   14040 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0528 14:04:13.244741   14040 cache.go:56] Caching tarball of preloaded images
	I0528 14:04:13.245591   14040 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0528 14:04:13.245591   14040 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0528 14:04:13.246222   14040 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\config.json ...
	I0528 14:04:13.248893   14040 start.go:360] acquireMachinesLock for ha-941700-m02: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0528 14:04:13.248893   14040 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-941700-m02"
	I0528 14:04:13.248893   14040 start.go:93] Provisioning new machine with config: &{Name:ha-941700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-941700 Namespace:default APIServerHAVIP:172.18.223.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.18.216.124 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0528 14:04:13.248893   14040 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0528 14:04:13.253739   14040 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0528 14:04:13.253739   14040 start.go:159] libmachine.API.Create for "ha-941700" (driver="hyperv")
	I0528 14:04:13.253739   14040 client.go:168] LocalClient.Create starting
	I0528 14:04:13.254427   14040 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0528 14:04:13.254427   14040 main.go:141] libmachine: Decoding PEM data...
	I0528 14:04:13.254427   14040 main.go:141] libmachine: Parsing certificate...
	I0528 14:04:13.255112   14040 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0528 14:04:13.255112   14040 main.go:141] libmachine: Decoding PEM data...
	I0528 14:04:13.255112   14040 main.go:141] libmachine: Parsing certificate...
	I0528 14:04:13.255112   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0528 14:04:15.219282   14040 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0528 14:04:15.220304   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:04:15.220410   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0528 14:04:16.969907   14040 main.go:141] libmachine: [stdout =====>] : False
	
	I0528 14:04:16.969988   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:04:16.970072   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0528 14:04:18.474965   14040 main.go:141] libmachine: [stdout =====>] : True
	
	I0528 14:04:18.475155   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:04:18.475231   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0528 14:04:22.222808   14040 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0528 14:04:22.222808   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:04:22.226295   14040 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1716398070-18934-amd64.iso...
	I0528 14:04:22.749831   14040 main.go:141] libmachine: Creating SSH key...
	I0528 14:04:23.011464   14040 main.go:141] libmachine: Creating VM...
	I0528 14:04:23.011464   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0528 14:04:26.008454   14040 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0528 14:04:26.009316   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:04:26.009435   14040 main.go:141] libmachine: Using switch "Default Switch"
	I0528 14:04:26.009484   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0528 14:04:27.870335   14040 main.go:141] libmachine: [stdout =====>] : True
	
	I0528 14:04:27.870335   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:04:27.870335   14040 main.go:141] libmachine: Creating VHD
	I0528 14:04:27.870671   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-941700-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0528 14:04:31.708369   14040 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-941700-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 76368530-AD82-4A71-9BC8-9335E626BFD1
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0528 14:04:31.708369   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:04:31.708752   14040 main.go:141] libmachine: Writing magic tar header
	I0528 14:04:31.708752   14040 main.go:141] libmachine: Writing SSH key tar header
	I0528 14:04:31.724581   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-941700-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-941700-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0528 14:04:34.988806   14040 main.go:141] libmachine: [stdout =====>] : 
	I0528 14:04:34.989689   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:04:34.989689   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-941700-m02\disk.vhd' -SizeBytes 20000MB
	I0528 14:04:37.590422   14040 main.go:141] libmachine: [stdout =====>] : 
	I0528 14:04:37.590422   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:04:37.591187   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-941700-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-941700-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0528 14:04:41.351498   14040 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-941700-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0528 14:04:41.352212   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:04:41.352212   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-941700-m02 -DynamicMemoryEnabled $false
	I0528 14:04:43.647841   14040 main.go:141] libmachine: [stdout =====>] : 
	I0528 14:04:43.647980   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:04:43.647980   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-941700-m02 -Count 2
	I0528 14:04:45.847083   14040 main.go:141] libmachine: [stdout =====>] : 
	I0528 14:04:45.847083   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:04:45.847744   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-941700-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-941700-m02\boot2docker.iso'
	I0528 14:04:48.537614   14040 main.go:141] libmachine: [stdout =====>] : 
	I0528 14:04:48.537614   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:04:48.537614   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-941700-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-941700-m02\disk.vhd'
	I0528 14:04:51.281115   14040 main.go:141] libmachine: [stdout =====>] : 
	I0528 14:04:51.281793   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:04:51.281793   14040 main.go:141] libmachine: Starting VM...
	I0528 14:04:51.281793   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-941700-m02
	I0528 14:04:54.377812   14040 main.go:141] libmachine: [stdout =====>] : 
	I0528 14:04:54.378319   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:04:54.378319   14040 main.go:141] libmachine: Waiting for host to start...
	I0528 14:04:54.378361   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700-m02 ).state
	I0528 14:04:56.715445   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:04:56.715526   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:04:56.715585   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700-m02 ).networkadapters[0]).ipaddresses[0]
	I0528 14:04:59.302055   14040 main.go:141] libmachine: [stdout =====>] : 
	I0528 14:04:59.302055   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:05:00.308253   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700-m02 ).state
	I0528 14:05:02.601133   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:05:02.601664   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:05:02.601801   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700-m02 ).networkadapters[0]).ipaddresses[0]
	I0528 14:05:05.233148   14040 main.go:141] libmachine: [stdout =====>] : 
	I0528 14:05:05.233148   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:05:06.245664   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700-m02 ).state
	I0528 14:05:08.516786   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:05:08.516829   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:05:08.516900   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700-m02 ).networkadapters[0]).ipaddresses[0]
	I0528 14:05:11.125989   14040 main.go:141] libmachine: [stdout =====>] : 
	I0528 14:05:11.125989   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:05:12.142669   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700-m02 ).state
	I0528 14:05:14.409616   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:05:14.409616   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:05:14.409715   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700-m02 ).networkadapters[0]).ipaddresses[0]
	I0528 14:05:17.010159   14040 main.go:141] libmachine: [stdout =====>] : 
	I0528 14:05:17.011261   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:05:18.016512   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700-m02 ).state
	I0528 14:05:20.325142   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:05:20.326147   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:05:20.326214   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700-m02 ).networkadapters[0]).ipaddresses[0]
	I0528 14:05:22.921265   14040 main.go:141] libmachine: [stdout =====>] : 172.18.211.44
	
	I0528 14:05:22.921594   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:05:22.921682   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700-m02 ).state
	I0528 14:05:25.127077   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:05:25.127077   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:05:25.127077   14040 machine.go:94] provisionDockerMachine start ...
	I0528 14:05:25.128197   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700-m02 ).state
	I0528 14:05:27.323532   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:05:27.323676   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:05:27.323752   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700-m02 ).networkadapters[0]).ipaddresses[0]
	I0528 14:05:29.906228   14040 main.go:141] libmachine: [stdout =====>] : 172.18.211.44
	
	I0528 14:05:29.906288   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:05:29.912560   14040 main.go:141] libmachine: Using SSH client type: native
	I0528 14:05:29.922621   14040 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5ca4a0] 0x5cd080 <nil>  [] 0s} 172.18.211.44 22 <nil> <nil>}
	I0528 14:05:29.922621   14040 main.go:141] libmachine: About to run SSH command:
	hostname
	I0528 14:05:30.041162   14040 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0528 14:05:30.041359   14040 buildroot.go:166] provisioning hostname "ha-941700-m02"
	I0528 14:05:30.041359   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700-m02 ).state
	I0528 14:05:32.230033   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:05:32.230163   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:05:32.230257   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700-m02 ).networkadapters[0]).ipaddresses[0]
	I0528 14:05:34.860480   14040 main.go:141] libmachine: [stdout =====>] : 172.18.211.44
	
	I0528 14:05:34.860480   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:05:34.865842   14040 main.go:141] libmachine: Using SSH client type: native
	I0528 14:05:34.866550   14040 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5ca4a0] 0x5cd080 <nil>  [] 0s} 172.18.211.44 22 <nil> <nil>}
	I0528 14:05:34.866550   14040 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-941700-m02 && echo "ha-941700-m02" | sudo tee /etc/hostname
	I0528 14:05:35.023817   14040 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-941700-m02
	
	I0528 14:05:35.023817   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700-m02 ).state
	I0528 14:05:37.273067   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:05:37.273669   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:05:37.273741   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700-m02 ).networkadapters[0]).ipaddresses[0]
	I0528 14:05:39.902424   14040 main.go:141] libmachine: [stdout =====>] : 172.18.211.44
	
	I0528 14:05:39.902424   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:05:39.908769   14040 main.go:141] libmachine: Using SSH client type: native
	I0528 14:05:39.908769   14040 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5ca4a0] 0x5cd080 <nil>  [] 0s} 172.18.211.44 22 <nil> <nil>}
	I0528 14:05:39.909294   14040 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-941700-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-941700-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-941700-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0528 14:05:40.049715   14040 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0528 14:05:40.049792   14040 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0528 14:05:40.049792   14040 buildroot.go:174] setting up certificates
	I0528 14:05:40.049792   14040 provision.go:84] configureAuth start
	I0528 14:05:40.049792   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700-m02 ).state
	I0528 14:05:42.218429   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:05:42.218429   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:05:42.218429   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700-m02 ).networkadapters[0]).ipaddresses[0]
	I0528 14:05:44.758292   14040 main.go:141] libmachine: [stdout =====>] : 172.18.211.44
	
	I0528 14:05:44.758292   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:05:44.758292   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700-m02 ).state
	I0528 14:05:46.933248   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:05:46.933869   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:05:46.933971   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700-m02 ).networkadapters[0]).ipaddresses[0]
	I0528 14:05:49.522186   14040 main.go:141] libmachine: [stdout =====>] : 172.18.211.44
	
	I0528 14:05:49.522186   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:05:49.522186   14040 provision.go:143] copyHostCerts
	I0528 14:05:49.522831   14040 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0528 14:05:49.522831   14040 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0528 14:05:49.522831   14040 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0528 14:05:49.523592   14040 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0528 14:05:49.524896   14040 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0528 14:05:49.525267   14040 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0528 14:05:49.525336   14040 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0528 14:05:49.525821   14040 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1675 bytes)
	I0528 14:05:49.526850   14040 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0528 14:05:49.527133   14040 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0528 14:05:49.527133   14040 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0528 14:05:49.527612   14040 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0528 14:05:49.528394   14040 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-941700-m02 san=[127.0.0.1 172.18.211.44 ha-941700-m02 localhost minikube]
	I0528 14:05:49.668030   14040 provision.go:177] copyRemoteCerts
	I0528 14:05:49.681674   14040 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0528 14:05:49.681674   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700-m02 ).state
	I0528 14:05:51.883949   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:05:51.884098   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:05:51.884098   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700-m02 ).networkadapters[0]).ipaddresses[0]
	I0528 14:05:54.504197   14040 main.go:141] libmachine: [stdout =====>] : 172.18.211.44
	
	I0528 14:05:54.504197   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:05:54.504197   14040 sshutil.go:53] new ssh client: &{IP:172.18.211.44 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-941700-m02\id_rsa Username:docker}
	I0528 14:05:54.610293   14040 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.9285518s)
	I0528 14:05:54.610357   14040 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0528 14:05:54.610509   14040 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0528 14:05:54.657473   14040 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0528 14:05:54.657473   14040 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0528 14:05:54.701662   14040 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0528 14:05:54.702735   14040 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0528 14:05:54.756112   14040 provision.go:87] duration metric: took 14.7062886s to configureAuth
	I0528 14:05:54.756112   14040 buildroot.go:189] setting minikube options for container-runtime
	I0528 14:05:54.756650   14040 config.go:182] Loaded profile config "ha-941700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0528 14:05:54.756732   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700-m02 ).state
	I0528 14:05:56.964041   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:05:56.964533   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:05:56.964533   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700-m02 ).networkadapters[0]).ipaddresses[0]
	I0528 14:05:59.553819   14040 main.go:141] libmachine: [stdout =====>] : 172.18.211.44
	
	I0528 14:05:59.553892   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:05:59.560033   14040 main.go:141] libmachine: Using SSH client type: native
	I0528 14:05:59.560584   14040 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5ca4a0] 0x5cd080 <nil>  [] 0s} 172.18.211.44 22 <nil> <nil>}
	I0528 14:05:59.560665   14040 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0528 14:05:59.684401   14040 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0528 14:05:59.684401   14040 buildroot.go:70] root file system type: tmpfs
	I0528 14:05:59.685200   14040 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0528 14:05:59.685200   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700-m02 ).state
	I0528 14:06:01.854310   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:06:01.855039   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:06:01.855039   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700-m02 ).networkadapters[0]).ipaddresses[0]
	I0528 14:06:04.442417   14040 main.go:141] libmachine: [stdout =====>] : 172.18.211.44
	
	I0528 14:06:04.442417   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:06:04.447434   14040 main.go:141] libmachine: Using SSH client type: native
	I0528 14:06:04.448035   14040 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5ca4a0] 0x5cd080 <nil>  [] 0s} 172.18.211.44 22 <nil> <nil>}
	I0528 14:06:04.448237   14040 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.18.216.124"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0528 14:06:04.599031   14040 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.18.216.124
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0528 14:06:04.599031   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700-m02 ).state
	I0528 14:06:06.782725   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:06:06.782809   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:06:06.782918   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700-m02 ).networkadapters[0]).ipaddresses[0]
	I0528 14:06:09.363720   14040 main.go:141] libmachine: [stdout =====>] : 172.18.211.44
	
	I0528 14:06:09.364370   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:06:09.370648   14040 main.go:141] libmachine: Using SSH client type: native
	I0528 14:06:09.371261   14040 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5ca4a0] 0x5cd080 <nil>  [] 0s} 172.18.211.44 22 <nil> <nil>}
	I0528 14:06:09.371337   14040 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0528 14:06:11.503788   14040 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0528 14:06:11.503788   14040 machine.go:97] duration metric: took 46.3755792s to provisionDockerMachine
	I0528 14:06:11.503788   14040 client.go:171] duration metric: took 1m58.2498015s to LocalClient.Create
	I0528 14:06:11.503788   14040 start.go:167] duration metric: took 1m58.2498015s to libmachine.API.Create "ha-941700"
	I0528 14:06:11.503788   14040 start.go:293] postStartSetup for "ha-941700-m02" (driver="hyperv")
	I0528 14:06:11.503788   14040 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0528 14:06:11.520021   14040 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0528 14:06:11.520021   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700-m02 ).state
	I0528 14:06:13.740931   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:06:13.740992   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:06:13.741049   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700-m02 ).networkadapters[0]).ipaddresses[0]
	I0528 14:06:16.339365   14040 main.go:141] libmachine: [stdout =====>] : 172.18.211.44
	
	I0528 14:06:16.340350   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:06:16.340620   14040 sshutil.go:53] new ssh client: &{IP:172.18.211.44 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-941700-m02\id_rsa Username:docker}
	I0528 14:06:16.452613   14040 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.9325816s)
	I0528 14:06:16.468378   14040 ssh_runner.go:195] Run: cat /etc/os-release
	I0528 14:06:16.478308   14040 info.go:137] Remote host: Buildroot 2023.02.9
	I0528 14:06:16.478308   14040 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0528 14:06:16.478482   14040 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0528 14:06:16.479863   14040 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\17802.pem -> 17802.pem in /etc/ssl/certs
	I0528 14:06:16.479863   14040 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\17802.pem -> /etc/ssl/certs/17802.pem
	I0528 14:06:16.492285   14040 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0528 14:06:16.513060   14040 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\17802.pem --> /etc/ssl/certs/17802.pem (1708 bytes)
	I0528 14:06:16.559235   14040 start.go:296] duration metric: took 5.0554363s for postStartSetup
	I0528 14:06:16.561613   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700-m02 ).state
	I0528 14:06:18.758934   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:06:18.758934   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:06:18.758934   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700-m02 ).networkadapters[0]).ipaddresses[0]
	I0528 14:06:21.390284   14040 main.go:141] libmachine: [stdout =====>] : 172.18.211.44
	
	I0528 14:06:21.390339   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:06:21.390556   14040 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\config.json ...
	I0528 14:06:21.393425   14040 start.go:128] duration metric: took 2m8.1441479s to createHost
	I0528 14:06:21.393520   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700-m02 ).state
	I0528 14:06:23.584653   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:06:23.584653   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:06:23.584653   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700-m02 ).networkadapters[0]).ipaddresses[0]
	I0528 14:06:26.173809   14040 main.go:141] libmachine: [stdout =====>] : 172.18.211.44
	
	I0528 14:06:26.173809   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:06:26.180395   14040 main.go:141] libmachine: Using SSH client type: native
	I0528 14:06:26.181057   14040 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5ca4a0] 0x5cd080 <nil>  [] 0s} 172.18.211.44 22 <nil> <nil>}
	I0528 14:06:26.181057   14040 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0528 14:06:26.304140   14040 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716930386.299865259
	
	I0528 14:06:26.304194   14040 fix.go:216] guest clock: 1716930386.299865259
	I0528 14:06:26.304194   14040 fix.go:229] Guest: 2024-05-28 14:06:26.299865259 -0700 PDT Remote: 2024-05-28 14:06:21.3934259 -0700 PDT m=+340.270103101 (delta=4.906439359s)
	I0528 14:06:26.304311   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700-m02 ).state
	I0528 14:06:28.539975   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:06:28.539975   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:06:28.539975   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700-m02 ).networkadapters[0]).ipaddresses[0]
	I0528 14:06:31.205741   14040 main.go:141] libmachine: [stdout =====>] : 172.18.211.44
	
	I0528 14:06:31.205993   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:06:31.212125   14040 main.go:141] libmachine: Using SSH client type: native
	I0528 14:06:31.212125   14040 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5ca4a0] 0x5cd080 <nil>  [] 0s} 172.18.211.44 22 <nil> <nil>}
	I0528 14:06:31.212125   14040 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1716930386
	I0528 14:06:31.356411   14040 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue May 28 21:06:26 UTC 2024
	
	I0528 14:06:31.356411   14040 fix.go:236] clock set: Tue May 28 21:06:26 UTC 2024
	 (err=<nil>)
	I0528 14:06:31.356411   14040 start.go:83] releasing machines lock for "ha-941700-m02", held for 2m18.1072283s
	I0528 14:06:31.356804   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700-m02 ).state
	I0528 14:06:33.572048   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:06:33.572334   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:06:33.572423   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700-m02 ).networkadapters[0]).ipaddresses[0]
	I0528 14:06:36.215557   14040 main.go:141] libmachine: [stdout =====>] : 172.18.211.44
	
	I0528 14:06:36.215557   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:06:36.219711   14040 out.go:177] * Found network options:
	I0528 14:06:36.223881   14040 out.go:177]   - NO_PROXY=172.18.216.124
	W0528 14:06:36.237996   14040 proxy.go:119] fail to check proxy env: Error ip not in block
	I0528 14:06:36.240699   14040 out.go:177]   - NO_PROXY=172.18.216.124
	W0528 14:06:36.242973   14040 proxy.go:119] fail to check proxy env: Error ip not in block
	W0528 14:06:36.244385   14040 proxy.go:119] fail to check proxy env: Error ip not in block
	I0528 14:06:36.247281   14040 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0528 14:06:36.247497   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700-m02 ).state
	I0528 14:06:36.259768   14040 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0528 14:06:36.259768   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700-m02 ).state
	I0528 14:06:38.504051   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:06:38.504051   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:06:38.504169   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700-m02 ).networkadapters[0]).ipaddresses[0]
	I0528 14:06:38.508127   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:06:38.508302   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:06:38.508302   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700-m02 ).networkadapters[0]).ipaddresses[0]
	I0528 14:06:41.278835   14040 main.go:141] libmachine: [stdout =====>] : 172.18.211.44
	
	I0528 14:06:41.278835   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:06:41.279417   14040 sshutil.go:53] new ssh client: &{IP:172.18.211.44 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-941700-m02\id_rsa Username:docker}
	I0528 14:06:41.310521   14040 main.go:141] libmachine: [stdout =====>] : 172.18.211.44
	
	I0528 14:06:41.310521   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:06:41.311551   14040 sshutil.go:53] new ssh client: &{IP:172.18.211.44 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-941700-m02\id_rsa Username:docker}
	I0528 14:06:41.434021   14040 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.1742421s)
	I0528 14:06:41.434807   14040 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.1873551s)
	W0528 14:06:41.434807   14040 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0528 14:06:41.448176   14040 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0528 14:06:41.479124   14040 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0528 14:06:41.479270   14040 start.go:494] detecting cgroup driver to use...
	I0528 14:06:41.479491   14040 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0528 14:06:41.525716   14040 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0528 14:06:41.558563   14040 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0528 14:06:41.577565   14040 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0528 14:06:41.589564   14040 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0528 14:06:41.621578   14040 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0528 14:06:41.653346   14040 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0528 14:06:41.683776   14040 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0528 14:06:41.717693   14040 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0528 14:06:41.751482   14040 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0528 14:06:41.784949   14040 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0528 14:06:41.816902   14040 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0528 14:06:41.850907   14040 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0528 14:06:41.879899   14040 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0528 14:06:41.909733   14040 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 14:06:42.102165   14040 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0528 14:06:42.134735   14040 start.go:494] detecting cgroup driver to use...
	I0528 14:06:42.149369   14040 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0528 14:06:42.185372   14040 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0528 14:06:42.222313   14040 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0528 14:06:42.271534   14040 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0528 14:06:42.308159   14040 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0528 14:06:42.343073   14040 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0528 14:06:42.407970   14040 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0528 14:06:42.434720   14040 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0528 14:06:42.484664   14040 ssh_runner.go:195] Run: which cri-dockerd
	I0528 14:06:42.506117   14040 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0528 14:06:42.523571   14040 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0528 14:06:42.572814   14040 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0528 14:06:42.769548   14040 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0528 14:06:42.958438   14040 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0528 14:06:42.958599   14040 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0528 14:06:43.004304   14040 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 14:06:43.213900   14040 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0528 14:06:45.756581   14040 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5426761s)
	I0528 14:06:45.768354   14040 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0528 14:06:45.806522   14040 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0528 14:06:45.847219   14040 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0528 14:06:46.064060   14040 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0528 14:06:46.275801   14040 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 14:06:46.477420   14040 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0528 14:06:46.525469   14040 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0528 14:06:46.565085   14040 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 14:06:46.769708   14040 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0528 14:06:46.883085   14040 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0528 14:06:46.898387   14040 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0528 14:06:46.907314   14040 start.go:562] Will wait 60s for crictl version
	I0528 14:06:46.922316   14040 ssh_runner.go:195] Run: which crictl
	I0528 14:06:46.941492   14040 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0528 14:06:46.994967   14040 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0528 14:06:47.003865   14040 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0528 14:06:47.048150   14040 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0528 14:06:47.082114   14040 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.0.2 ...
	I0528 14:06:47.087838   14040 out.go:177]   - env NO_PROXY=172.18.216.124
	I0528 14:06:47.089541   14040 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0528 14:06:47.094135   14040 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0528 14:06:47.094135   14040 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0528 14:06:47.094135   14040 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0528 14:06:47.094135   14040 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:91:32:cc Flags:up|broadcast|multicast|running}
	I0528 14:06:47.097599   14040 ip.go:210] interface addr: fe80::6a65:d82d:f435:d71f/64
	I0528 14:06:47.097599   14040 ip.go:210] interface addr: 172.18.208.1/20
	I0528 14:06:47.112315   14040 ssh_runner.go:195] Run: grep 172.18.208.1	host.minikube.internal$ /etc/hosts
	I0528 14:06:47.119203   14040 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.18.208.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0528 14:06:47.140686   14040 mustload.go:65] Loading cluster: ha-941700
	I0528 14:06:47.141317   14040 config.go:182] Loaded profile config "ha-941700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0528 14:06:47.142166   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700 ).state
	I0528 14:06:49.341576   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:06:49.341911   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:06:49.341911   14040 host.go:66] Checking if "ha-941700" exists ...
	I0528 14:06:49.342623   14040 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700 for IP: 172.18.211.44
	I0528 14:06:49.342623   14040 certs.go:194] generating shared ca certs ...
	I0528 14:06:49.342716   14040 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 14:06:49.342998   14040 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0528 14:06:49.343749   14040 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0528 14:06:49.343749   14040 certs.go:256] generating profile certs ...
	I0528 14:06:49.344883   14040 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\client.key
	I0528 14:06:49.344987   14040 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\apiserver.key.3416c130
	I0528 14:06:49.345230   14040 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\apiserver.crt.3416c130 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.18.216.124 172.18.211.44 172.18.223.254]
	I0528 14:06:49.496634   14040 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\apiserver.crt.3416c130 ...
	I0528 14:06:49.496634   14040 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\apiserver.crt.3416c130: {Name:mk3007809a45b4e22adf6f2b66085f72973b6137 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 14:06:49.497964   14040 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\apiserver.key.3416c130 ...
	I0528 14:06:49.497964   14040 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\apiserver.key.3416c130: {Name:mke71e2405d24da228b21d1fd240c74c98071bc5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 14:06:49.498924   14040 certs.go:381] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\apiserver.crt.3416c130 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\apiserver.crt
	I0528 14:06:49.513583   14040 certs.go:385] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\apiserver.key.3416c130 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\apiserver.key
	I0528 14:06:49.514382   14040 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\proxy-client.key
	I0528 14:06:49.514382   14040 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0528 14:06:49.515013   14040 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0528 14:06:49.515207   14040 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0528 14:06:49.515356   14040 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0528 14:06:49.515356   14040 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0528 14:06:49.515356   14040 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0528 14:06:49.515356   14040 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0528 14:06:49.515356   14040 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0528 14:06:49.516355   14040 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\1780.pem (1338 bytes)
	W0528 14:06:49.516355   14040 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\1780_empty.pem, impossibly tiny 0 bytes
	I0528 14:06:49.516355   14040 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0528 14:06:49.516355   14040 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0528 14:06:49.516355   14040 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0528 14:06:49.517356   14040 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0528 14:06:49.517356   14040 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\17802.pem (1708 bytes)
	I0528 14:06:49.517356   14040 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\17802.pem -> /usr/share/ca-certificates/17802.pem
	I0528 14:06:49.517356   14040 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0528 14:06:49.518367   14040 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\1780.pem -> /usr/share/ca-certificates/1780.pem
	I0528 14:06:49.518367   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700 ).state
	I0528 14:06:51.783227   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:06:51.783227   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:06:51.783227   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700 ).networkadapters[0]).ipaddresses[0]
	I0528 14:06:54.446486   14040 main.go:141] libmachine: [stdout =====>] : 172.18.216.124
	
	I0528 14:06:54.446714   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:06:54.446893   14040 sshutil.go:53] new ssh client: &{IP:172.18.216.124 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-941700\id_rsa Username:docker}
	I0528 14:06:54.542193   14040 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0528 14:06:54.551643   14040 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0528 14:06:54.592234   14040 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0528 14:06:54.600755   14040 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0528 14:06:54.633894   14040 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0528 14:06:54.641414   14040 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0528 14:06:54.672954   14040 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0528 14:06:54.680833   14040 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0528 14:06:54.714494   14040 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0528 14:06:54.722067   14040 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0528 14:06:54.762616   14040 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0528 14:06:54.770420   14040 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0528 14:06:54.796185   14040 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0528 14:06:54.848335   14040 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0528 14:06:54.904155   14040 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0528 14:06:54.957932   14040 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0528 14:06:55.003837   14040 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0528 14:06:55.049357   14040 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0528 14:06:55.100311   14040 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0528 14:06:55.153656   14040 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0528 14:06:55.201155   14040 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\17802.pem --> /usr/share/ca-certificates/17802.pem (1708 bytes)
	I0528 14:06:55.249320   14040 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0528 14:06:55.297280   14040 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\1780.pem --> /usr/share/ca-certificates/1780.pem (1338 bytes)
	I0528 14:06:55.349829   14040 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0528 14:06:55.384899   14040 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0528 14:06:55.416859   14040 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0528 14:06:55.450522   14040 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0528 14:06:55.484373   14040 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0528 14:06:55.518616   14040 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0528 14:06:55.554599   14040 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0528 14:06:55.599480   14040 ssh_runner.go:195] Run: openssl version
	I0528 14:06:55.621213   14040 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1780.pem && ln -fs /usr/share/ca-certificates/1780.pem /etc/ssl/certs/1780.pem"
	I0528 14:06:55.654405   14040 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1780.pem
	I0528 14:06:55.661632   14040 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 28 20:41 /usr/share/ca-certificates/1780.pem
	I0528 14:06:55.673708   14040 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1780.pem
	I0528 14:06:55.694837   14040 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1780.pem /etc/ssl/certs/51391683.0"
	I0528 14:06:55.726609   14040 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17802.pem && ln -fs /usr/share/ca-certificates/17802.pem /etc/ssl/certs/17802.pem"
	I0528 14:06:55.757127   14040 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17802.pem
	I0528 14:06:55.764804   14040 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 28 20:41 /usr/share/ca-certificates/17802.pem
	I0528 14:06:55.778174   14040 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17802.pem
	I0528 14:06:55.798574   14040 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/17802.pem /etc/ssl/certs/3ec20f2e.0"
	I0528 14:06:55.832906   14040 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0528 14:06:55.866041   14040 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0528 14:06:55.872458   14040 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 28 20:25 /usr/share/ca-certificates/minikubeCA.pem
	I0528 14:06:55.885563   14040 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0528 14:06:55.906352   14040 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0528 14:06:55.942093   14040 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0528 14:06:55.950745   14040 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0528 14:06:55.951053   14040 kubeadm.go:928] updating node {m02 172.18.211.44 8443 v1.30.1 docker true true} ...
	I0528 14:06:55.951241   14040 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-941700-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.18.211.44
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-941700 Namespace:default APIServerHAVIP:172.18.223.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0528 14:06:55.951323   14040 kube-vip.go:115] generating kube-vip config ...
	I0528 14:06:55.963722   14040 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0528 14:06:55.991401   14040 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0528 14:06:55.991616   14040 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.18.223.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0528 14:06:56.005091   14040 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0528 14:06:56.021925   14040 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.1': No such file or directory
	
	Initiating transfer...
	I0528 14:06:56.037314   14040 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.1
	I0528 14:06:56.061834   14040 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet.sha256 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubelet
	I0528 14:06:56.061834   14040 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl.sha256 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubectl
	I0528 14:06:56.061834   14040 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm.sha256 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubeadm
	I0528 14:06:57.190502   14040 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubeadm -> /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0528 14:06:57.202477   14040 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0528 14:06:57.210453   14040 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubeadm': No such file or directory
	I0528 14:06:57.210770   14040 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubeadm --> /var/lib/minikube/binaries/v1.30.1/kubeadm (50249880 bytes)
	I0528 14:06:57.238048   14040 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubectl -> /var/lib/minikube/binaries/v1.30.1/kubectl
	I0528 14:06:57.249957   14040 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubectl
	I0528 14:06:57.322511   14040 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubectl': No such file or directory
	I0528 14:06:57.322789   14040 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubectl --> /var/lib/minikube/binaries/v1.30.1/kubectl (51454104 bytes)
	I0528 14:06:57.604099   14040 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0528 14:06:57.689301   14040 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubelet -> /var/lib/minikube/binaries/v1.30.1/kubelet
	I0528 14:06:57.702202   14040 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubelet
	I0528 14:06:57.724091   14040 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubelet': No such file or directory
	I0528 14:06:57.724743   14040 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubelet --> /var/lib/minikube/binaries/v1.30.1/kubelet (100100024 bytes)
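Each transfer above is gated on a plain existence check: the scp only runs when stat fails on the target path. A minimal sketch of that gate, assuming the same binary path shown in the log:

    // Hypothetical sketch of the gate above: only transfer when the path is absent.
    package main

    import (
        "fmt"
        "os"
    )

    func needsTransfer(path string) bool {
        _, err := os.Stat(path)
        return os.IsNotExist(err)
    }

    func main() {
        fmt.Println(needsTransfer("/var/lib/minikube/binaries/v1.30.1/kubelet"))
    }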
	I0528 14:06:58.711363   14040 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0528 14:06:58.730386   14040 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0528 14:06:58.763792   14040 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0528 14:06:58.795712   14040 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0528 14:06:58.840258   14040 ssh_runner.go:195] Run: grep 172.18.223.254	control-plane.minikube.internal$ /etc/hosts
	I0528 14:06:58.847746   14040 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.18.223.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
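The bash one-liner above pins control-plane.minikube.internal to the HA virtual IP by rewriting /etc/hosts in place. The same rewrite expressed as a small Go sketch, assuming the entry and VIP shown in the log:

    // Hypothetical sketch of the /etc/hosts rewrite above: drop any stale
    // control-plane.minikube.internal line, then append the current VIP entry.
    package main

    import (
        "os"
        "strings"
    )

    func main() {
        const entry = "172.18.223.254\tcontrol-plane.minikube.internal"
        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            panic(err)
        }
        var kept []string
        for _, line := range strings.Split(string(data), "\n") {
            if line == "" || strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
                continue // skip blank lines and any previous pin
            }
            kept = append(kept, line)
        }
        kept = append(kept, entry)
        if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
            panic(err)
        }
    }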
	I0528 14:06:58.887458   14040 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 14:06:59.087194   14040 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0528 14:06:59.120443   14040 host.go:66] Checking if "ha-941700" exists ...
	I0528 14:06:59.121859   14040 start.go:316] joinCluster: &{Name:ha-941700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 Clust
erName:ha-941700 Namespace:default APIServerHAVIP:172.18.223.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.18.216.124 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.18.211.44 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertEx
piration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0528 14:06:59.121859   14040 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0528 14:06:59.121859   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700 ).state
	I0528 14:07:01.291653   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:07:01.292114   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:07:01.292114   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700 ).networkadapters[0]).ipaddresses[0]
	I0528 14:07:03.967064   14040 main.go:141] libmachine: [stdout =====>] : 172.18.216.124
	
	I0528 14:07:03.967740   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:07:03.967740   14040 sshutil.go:53] new ssh client: &{IP:172.18.216.124 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-941700\id_rsa Username:docker}
	I0528 14:07:04.171439   14040 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm token create --print-join-command --ttl=0": (5.0495692s)
	I0528 14:07:04.171539   14040 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:172.18.211.44 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0528 14:07:04.171689   14040 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token pjbq9b.3kg1wcqsonrr0ls0 --discovery-token-ca-cert-hash sha256:539ff17077d9a18fe6ace348cbfbe22affc6c6390f84ec7c9294c19b6ec125ea --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-941700-m02 --control-plane --apiserver-advertise-address=172.18.211.44 --apiserver-bind-port=8443"
	I0528 14:07:49.251617   14040 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token pjbq9b.3kg1wcqsonrr0ls0 --discovery-token-ca-cert-hash sha256:539ff17077d9a18fe6ace348cbfbe22affc6c6390f84ec7c9294c19b6ec125ea --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-941700-m02 --control-plane --apiserver-advertise-address=172.18.211.44 --apiserver-bind-port=8443": (45.0796647s)
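The join sequence above is two-phase: a token and discovery hash are minted on the existing control plane with kubeadm token create --print-join-command, and the printed command is then replayed on the new node with --control-plane and --apiserver-advertise-address appended. A minimal sketch of the first phase, assuming the same kubeadm path used in the log:

    // Hypothetical sketch of phase one: mint a non-expiring join command on the
    // existing control plane via os/exec.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.30.1/kubeadm",
            "token", "create", "--print-join-command", "--ttl=0")
        out, err := cmd.CombinedOutput()
        if err != nil {
            panic(fmt.Sprintf("%v: %s", err, out))
        }
        // The printed "kubeadm join ..." line is what gets executed on the joining
        // node, with the control-plane flags appended as seen above.
        fmt.Print(string(out))
    }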
	I0528 14:07:49.251699   14040 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0528 14:07:50.167917   14040 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-941700-m02 minikube.k8s.io/updated_at=2024_05_28T14_07_50_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=c95b4fdda455689199e2a93674568b261e34dc82 minikube.k8s.io/name=ha-941700 minikube.k8s.io/primary=false
	I0528 14:07:50.358582   14040 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-941700-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0528 14:07:50.540943   14040 start.go:318] duration metric: took 51.4189125s to joinCluster
	I0528 14:07:50.541104   14040 start.go:234] Will wait 6m0s for node &{Name:m02 IP:172.18.211.44 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0528 14:07:50.544511   14040 out.go:177] * Verifying Kubernetes components...
	I0528 14:07:50.541966   14040 config.go:182] Loaded profile config "ha-941700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0528 14:07:50.559366   14040 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 14:07:50.986707   14040 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0528 14:07:51.024614   14040 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0528 14:07:51.024614   14040 kapi.go:59] client config for ha-941700: &rest.Config{Host:"https://172.18.223.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-941700\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-941700\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Ne
xtProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1a6b5a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0528 14:07:51.024614   14040 kubeadm.go:477] Overriding stale ClientConfig host https://172.18.223.254:8443 with https://172.18.216.124:8443
	I0528 14:07:51.024614   14040 node_ready.go:35] waiting up to 6m0s for node "ha-941700-m02" to be "Ready" ...
	I0528 14:07:51.024614   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700-m02
	I0528 14:07:51.024614   14040 round_trippers.go:469] Request Headers:
	I0528 14:07:51.024614   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:07:51.024614   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:07:51.050927   14040 round_trippers.go:574] Response Status: 200 OK in 26 milliseconds
	I0528 14:07:51.540393   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700-m02
	I0528 14:07:51.540393   14040 round_trippers.go:469] Request Headers:
	I0528 14:07:51.540393   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:07:51.540393   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:07:51.547288   14040 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0528 14:07:52.033001   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700-m02
	I0528 14:07:52.033116   14040 round_trippers.go:469] Request Headers:
	I0528 14:07:52.033116   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:07:52.033116   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:07:52.038523   14040 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0528 14:07:52.538048   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700-m02
	I0528 14:07:52.538106   14040 round_trippers.go:469] Request Headers:
	I0528 14:07:52.538106   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:07:52.538106   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:07:52.543866   14040 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0528 14:07:53.029240   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700-m02
	I0528 14:07:53.029240   14040 round_trippers.go:469] Request Headers:
	I0528 14:07:53.029240   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:07:53.029240   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:07:53.369081   14040 round_trippers.go:574] Response Status: 200 OK in 339 milliseconds
	I0528 14:07:53.370439   14040 node_ready.go:53] node "ha-941700-m02" has status "Ready":"False"
	I0528 14:07:53.536687   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700-m02
	I0528 14:07:53.536687   14040 round_trippers.go:469] Request Headers:
	I0528 14:07:53.536687   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:07:53.536687   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:07:53.542317   14040 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0528 14:07:54.028702   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700-m02
	I0528 14:07:54.028934   14040 round_trippers.go:469] Request Headers:
	I0528 14:07:54.028934   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:07:54.028934   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:07:54.034290   14040 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0528 14:07:54.529251   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700-m02
	I0528 14:07:54.529363   14040 round_trippers.go:469] Request Headers:
	I0528 14:07:54.529363   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:07:54.529363   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:07:54.534072   14040 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 14:07:55.035419   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700-m02
	I0528 14:07:55.035419   14040 round_trippers.go:469] Request Headers:
	I0528 14:07:55.035419   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:07:55.035419   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:07:55.057174   14040 round_trippers.go:574] Response Status: 200 OK in 21 milliseconds
	I0528 14:07:55.530153   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700-m02
	I0528 14:07:55.530153   14040 round_trippers.go:469] Request Headers:
	I0528 14:07:55.530153   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:07:55.530153   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:07:55.534837   14040 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 14:07:55.535805   14040 node_ready.go:53] node "ha-941700-m02" has status "Ready":"False"
	I0528 14:07:56.040731   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700-m02
	I0528 14:07:56.040731   14040 round_trippers.go:469] Request Headers:
	I0528 14:07:56.040731   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:07:56.040731   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:07:56.045600   14040 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 14:07:56.535680   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700-m02
	I0528 14:07:56.535680   14040 round_trippers.go:469] Request Headers:
	I0528 14:07:56.535680   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:07:56.535680   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:07:56.541251   14040 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0528 14:07:57.025259   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700-m02
	I0528 14:07:57.025473   14040 round_trippers.go:469] Request Headers:
	I0528 14:07:57.025473   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:07:57.025473   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:07:57.061115   14040 round_trippers.go:574] Response Status: 200 OK in 35 milliseconds
	I0528 14:07:57.539429   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700-m02
	I0528 14:07:57.539429   14040 round_trippers.go:469] Request Headers:
	I0528 14:07:57.539429   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:07:57.539429   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:07:57.545022   14040 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0528 14:07:57.547393   14040 node_ready.go:49] node "ha-941700-m02" has status "Ready":"True"
	I0528 14:07:57.547393   14040 node_ready.go:38] duration metric: took 6.5227651s for node "ha-941700-m02" to be "Ready" ...
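The readiness wait above is a simple poll of the node object: GET /api/v1/nodes/<name> roughly every half second and inspect the Ready condition. A standalone sketch of that loop, assuming an *http.Client already authenticated with the client certificate and CA from the kubeconfig (the bare http.Client below is only a placeholder):

    // Hypothetical sketch of the readiness poll: fetch the node object and check
    // the Ready condition until it reports "True". TLS/auth setup is omitted.
    package main

    import (
        "encoding/json"
        "fmt"
        "net/http"
        "time"
    )

    type nodeStatus struct {
        Status struct {
            Conditions []struct {
                Type   string `json:"type"`
                Status string `json:"status"`
            } `json:"conditions"`
        } `json:"status"`
    }

    func nodeReady(c *http.Client, apiServer, name string) (bool, error) {
        resp, err := c.Get(apiServer + "/api/v1/nodes/" + name)
        if err != nil {
            return false, err
        }
        defer resp.Body.Close()
        var n nodeStatus
        if err := json.NewDecoder(resp.Body).Decode(&n); err != nil {
            return false, err
        }
        for _, cond := range n.Status.Conditions {
            if cond.Type == "Ready" {
                return cond.Status == "True", nil
            }
        }
        return false, nil
    }

    func main() {
        c := &http.Client{} // placeholder: real use needs the cert/key and CA from the kubeconfig
        for {
            if ok, err := nodeReady(c, "https://172.18.216.124:8443", "ha-941700-m02"); err == nil && ok {
                fmt.Println("node Ready")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
    }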
	I0528 14:07:57.547393   14040 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0528 14:07:57.547393   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods
	I0528 14:07:57.547393   14040 round_trippers.go:469] Request Headers:
	I0528 14:07:57.547393   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:07:57.547393   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:07:57.557212   14040 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0528 14:07:57.568427   14040 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-8prdw" in "kube-system" namespace to be "Ready" ...
	I0528 14:07:57.569341   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8prdw
	I0528 14:07:57.569341   14040 round_trippers.go:469] Request Headers:
	I0528 14:07:57.569341   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:07:57.569341   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:07:57.583201   14040 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0528 14:07:57.584050   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700
	I0528 14:07:57.584050   14040 round_trippers.go:469] Request Headers:
	I0528 14:07:57.584050   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:07:57.584050   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:07:57.593043   14040 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0528 14:07:57.595245   14040 pod_ready.go:92] pod "coredns-7db6d8ff4d-8prdw" in "kube-system" namespace has status "Ready":"True"
	I0528 14:07:57.595245   14040 pod_ready.go:81] duration metric: took 26.8173ms for pod "coredns-7db6d8ff4d-8prdw" in "kube-system" namespace to be "Ready" ...
	I0528 14:07:57.595245   14040 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-kmzqb" in "kube-system" namespace to be "Ready" ...
	I0528 14:07:57.595245   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kmzqb
	I0528 14:07:57.595245   14040 round_trippers.go:469] Request Headers:
	I0528 14:07:57.595245   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:07:57.595245   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:07:57.599012   14040 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 14:07:57.600181   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700
	I0528 14:07:57.600181   14040 round_trippers.go:469] Request Headers:
	I0528 14:07:57.600181   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:07:57.600181   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:07:57.604012   14040 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 14:07:57.605096   14040 pod_ready.go:92] pod "coredns-7db6d8ff4d-kmzqb" in "kube-system" namespace has status "Ready":"True"
	I0528 14:07:57.605096   14040 pod_ready.go:81] duration metric: took 9.8507ms for pod "coredns-7db6d8ff4d-kmzqb" in "kube-system" namespace to be "Ready" ...
	I0528 14:07:57.605096   14040 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-941700" in "kube-system" namespace to be "Ready" ...
	I0528 14:07:57.605096   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods/etcd-ha-941700
	I0528 14:07:57.605096   14040 round_trippers.go:469] Request Headers:
	I0528 14:07:57.605096   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:07:57.605096   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:07:57.610062   14040 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 14:07:57.610915   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700
	I0528 14:07:57.610915   14040 round_trippers.go:469] Request Headers:
	I0528 14:07:57.610915   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:07:57.610915   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:07:57.616515   14040 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0528 14:07:57.617842   14040 pod_ready.go:92] pod "etcd-ha-941700" in "kube-system" namespace has status "Ready":"True"
	I0528 14:07:57.617842   14040 pod_ready.go:81] duration metric: took 12.7464ms for pod "etcd-ha-941700" in "kube-system" namespace to be "Ready" ...
	I0528 14:07:57.617842   14040 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-941700-m02" in "kube-system" namespace to be "Ready" ...
	I0528 14:07:57.618531   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods/etcd-ha-941700-m02
	I0528 14:07:57.618531   14040 round_trippers.go:469] Request Headers:
	I0528 14:07:57.618531   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:07:57.618531   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:07:57.623125   14040 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 14:07:57.624584   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700-m02
	I0528 14:07:57.624584   14040 round_trippers.go:469] Request Headers:
	I0528 14:07:57.624670   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:07:57.624670   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:07:57.627938   14040 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 14:07:58.132295   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods/etcd-ha-941700-m02
	I0528 14:07:58.132295   14040 round_trippers.go:469] Request Headers:
	I0528 14:07:58.132295   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:07:58.132295   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:07:58.138451   14040 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0528 14:07:58.142744   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700-m02
	I0528 14:07:58.142875   14040 round_trippers.go:469] Request Headers:
	I0528 14:07:58.142875   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:07:58.142875   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:07:58.147745   14040 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 14:07:58.618589   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods/etcd-ha-941700-m02
	I0528 14:07:58.618830   14040 round_trippers.go:469] Request Headers:
	I0528 14:07:58.618830   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:07:58.618830   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:07:58.624556   14040 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0528 14:07:58.624978   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700-m02
	I0528 14:07:58.624978   14040 round_trippers.go:469] Request Headers:
	I0528 14:07:58.624978   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:07:58.624978   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:07:58.632122   14040 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0528 14:07:59.121182   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods/etcd-ha-941700-m02
	I0528 14:07:59.121182   14040 round_trippers.go:469] Request Headers:
	I0528 14:07:59.121182   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:07:59.121182   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:07:59.126056   14040 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 14:07:59.127637   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700-m02
	I0528 14:07:59.127637   14040 round_trippers.go:469] Request Headers:
	I0528 14:07:59.127637   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:07:59.127637   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:07:59.131756   14040 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 14:07:59.630528   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods/etcd-ha-941700-m02
	I0528 14:07:59.630528   14040 round_trippers.go:469] Request Headers:
	I0528 14:07:59.630528   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:07:59.630528   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:07:59.636172   14040 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0528 14:07:59.637557   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700-m02
	I0528 14:07:59.637557   14040 round_trippers.go:469] Request Headers:
	I0528 14:07:59.637557   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:07:59.637633   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:07:59.641873   14040 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 14:07:59.642769   14040 pod_ready.go:102] pod "etcd-ha-941700-m02" in "kube-system" namespace has status "Ready":"False"
	I0528 14:08:00.129469   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods/etcd-ha-941700-m02
	I0528 14:08:00.129469   14040 round_trippers.go:469] Request Headers:
	I0528 14:08:00.129469   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:08:00.129469   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:08:00.135895   14040 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0528 14:08:00.137296   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700-m02
	I0528 14:08:00.137296   14040 round_trippers.go:469] Request Headers:
	I0528 14:08:00.137296   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:08:00.137296   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:08:00.141657   14040 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 14:08:00.630213   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods/etcd-ha-941700-m02
	I0528 14:08:00.630441   14040 round_trippers.go:469] Request Headers:
	I0528 14:08:00.630441   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:08:00.630441   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:08:00.638868   14040 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0528 14:08:00.640262   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700-m02
	I0528 14:08:00.640378   14040 round_trippers.go:469] Request Headers:
	I0528 14:08:00.640378   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:08:00.640378   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:08:00.644274   14040 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 14:08:01.129970   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods/etcd-ha-941700-m02
	I0528 14:08:01.129970   14040 round_trippers.go:469] Request Headers:
	I0528 14:08:01.129970   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:08:01.129970   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:08:01.135616   14040 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0528 14:08:01.136779   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700-m02
	I0528 14:08:01.136779   14040 round_trippers.go:469] Request Headers:
	I0528 14:08:01.136779   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:08:01.136779   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:08:01.140055   14040 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 14:08:01.141490   14040 pod_ready.go:92] pod "etcd-ha-941700-m02" in "kube-system" namespace has status "Ready":"True"
	I0528 14:08:01.141490   14040 pod_ready.go:81] duration metric: took 3.5236405s for pod "etcd-ha-941700-m02" in "kube-system" namespace to be "Ready" ...
	I0528 14:08:01.141490   14040 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-941700" in "kube-system" namespace to be "Ready" ...
	I0528 14:08:01.141490   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-941700
	I0528 14:08:01.141490   14040 round_trippers.go:469] Request Headers:
	I0528 14:08:01.141490   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:08:01.141490   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:08:01.145867   14040 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 14:08:01.146681   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700
	I0528 14:08:01.146681   14040 round_trippers.go:469] Request Headers:
	I0528 14:08:01.146681   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:08:01.146681   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:08:01.150366   14040 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 14:08:01.151104   14040 pod_ready.go:92] pod "kube-apiserver-ha-941700" in "kube-system" namespace has status "Ready":"True"
	I0528 14:08:01.151181   14040 pod_ready.go:81] duration metric: took 9.6908ms for pod "kube-apiserver-ha-941700" in "kube-system" namespace to be "Ready" ...
	I0528 14:08:01.151181   14040 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-941700-m02" in "kube-system" namespace to be "Ready" ...
	I0528 14:08:01.151264   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-941700-m02
	I0528 14:08:01.151306   14040 round_trippers.go:469] Request Headers:
	I0528 14:08:01.151342   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:08:01.151342   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:08:01.156690   14040 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0528 14:08:01.158014   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700-m02
	I0528 14:08:01.158014   14040 round_trippers.go:469] Request Headers:
	I0528 14:08:01.158014   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:08:01.158014   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:08:01.162638   14040 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 14:08:01.658616   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-941700-m02
	I0528 14:08:01.658616   14040 round_trippers.go:469] Request Headers:
	I0528 14:08:01.658707   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:08:01.658707   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:08:01.664004   14040 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0528 14:08:01.665794   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700-m02
	I0528 14:08:01.665794   14040 round_trippers.go:469] Request Headers:
	I0528 14:08:01.665794   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:08:01.665794   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:08:01.669833   14040 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 14:08:02.156036   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-941700-m02
	I0528 14:08:02.156036   14040 round_trippers.go:469] Request Headers:
	I0528 14:08:02.156148   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:08:02.156148   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:08:02.160530   14040 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 14:08:02.162448   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700-m02
	I0528 14:08:02.162572   14040 round_trippers.go:469] Request Headers:
	I0528 14:08:02.162572   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:08:02.162629   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:08:02.168506   14040 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0528 14:08:02.659474   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-941700-m02
	I0528 14:08:02.659537   14040 round_trippers.go:469] Request Headers:
	I0528 14:08:02.659537   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:08:02.659537   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:08:02.679423   14040 round_trippers.go:574] Response Status: 200 OK in 19 milliseconds
	I0528 14:08:02.681487   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700-m02
	I0528 14:08:02.681602   14040 round_trippers.go:469] Request Headers:
	I0528 14:08:02.681602   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:08:02.681602   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:08:02.685859   14040 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 14:08:02.687209   14040 pod_ready.go:92] pod "kube-apiserver-ha-941700-m02" in "kube-system" namespace has status "Ready":"True"
	I0528 14:08:02.687266   14040 pod_ready.go:81] duration metric: took 1.5360822s for pod "kube-apiserver-ha-941700-m02" in "kube-system" namespace to be "Ready" ...
	I0528 14:08:02.687266   14040 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-941700" in "kube-system" namespace to be "Ready" ...
	I0528 14:08:02.687380   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-941700
	I0528 14:08:02.687436   14040 round_trippers.go:469] Request Headers:
	I0528 14:08:02.687436   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:08:02.687476   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:08:02.692184   14040 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 14:08:02.751991   14040 request.go:629] Waited for 58.7585ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.216.124:8443/api/v1/nodes/ha-941700
	I0528 14:08:02.752205   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700
	I0528 14:08:02.752244   14040 round_trippers.go:469] Request Headers:
	I0528 14:08:02.752244   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:08:02.752280   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:08:02.757409   14040 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0528 14:08:02.759134   14040 pod_ready.go:92] pod "kube-controller-manager-ha-941700" in "kube-system" namespace has status "Ready":"True"
	I0528 14:08:02.759134   14040 pod_ready.go:81] duration metric: took 71.8673ms for pod "kube-controller-manager-ha-941700" in "kube-system" namespace to be "Ready" ...
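The repeated "Waited ... due to client-side throttling" lines come from client-go's default client-side rate limiter (the kapi.go dump earlier shows QPS:0, Burst:0, i.e. library defaults of roughly 5 requests/second with a burst of 10). If that extra latency mattered, the limits could be raised on the rest.Config; a minimal sketch, with the kubeconfig path as a placeholder and purely illustrative values:

    // Hypothetical sketch: raise client-go's client-side rate limits so the
    // back-to-back node/pod GETs above are not queued by the limiter.
    package main

    import (
        "fmt"

        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
        if err != nil {
            panic(err)
        }
        cfg.QPS = 50    // zero means the client-go default of ~5 requests/second
        cfg.Burst = 100 // zero means the default burst of 10
        fmt.Printf("client rate limits: qps=%v burst=%v\n", cfg.QPS, cfg.Burst)
    }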
	I0528 14:08:02.759193   14040 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-941700-m02" in "kube-system" namespace to be "Ready" ...
	I0528 14:08:02.954469   14040 request.go:629] Waited for 194.981ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-941700-m02
	I0528 14:08:02.954557   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-941700-m02
	I0528 14:08:02.954557   14040 round_trippers.go:469] Request Headers:
	I0528 14:08:02.954675   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:08:02.954675   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:08:02.961489   14040 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0528 14:08:03.142381   14040 request.go:629] Waited for 179.6949ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.216.124:8443/api/v1/nodes/ha-941700-m02
	I0528 14:08:03.142660   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700-m02
	I0528 14:08:03.142660   14040 round_trippers.go:469] Request Headers:
	I0528 14:08:03.142660   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:08:03.142660   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:08:03.148764   14040 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0528 14:08:03.345238   14040 request.go:629] Waited for 78.0303ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-941700-m02
	I0528 14:08:03.345238   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-941700-m02
	I0528 14:08:03.345365   14040 round_trippers.go:469] Request Headers:
	I0528 14:08:03.345365   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:08:03.345438   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:08:03.351952   14040 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0528 14:08:03.549536   14040 request.go:629] Waited for 196.3033ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.216.124:8443/api/v1/nodes/ha-941700-m02
	I0528 14:08:03.549649   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700-m02
	I0528 14:08:03.549867   14040 round_trippers.go:469] Request Headers:
	I0528 14:08:03.549867   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:08:03.549968   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:08:03.556252   14040 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0528 14:08:03.767102   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-941700-m02
	I0528 14:08:03.767102   14040 round_trippers.go:469] Request Headers:
	I0528 14:08:03.767102   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:08:03.767102   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:08:03.772696   14040 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0528 14:08:03.940326   14040 request.go:629] Waited for 166.2176ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.216.124:8443/api/v1/nodes/ha-941700-m02
	I0528 14:08:03.940579   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700-m02
	I0528 14:08:03.940579   14040 round_trippers.go:469] Request Headers:
	I0528 14:08:03.940579   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:08:03.940668   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:08:03.948952   14040 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0528 14:08:04.267301   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-941700-m02
	I0528 14:08:04.267301   14040 round_trippers.go:469] Request Headers:
	I0528 14:08:04.267301   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:08:04.267301   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:08:04.272149   14040 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 14:08:04.346569   14040 request.go:629] Waited for 72.9879ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.216.124:8443/api/v1/nodes/ha-941700-m02
	I0528 14:08:04.346660   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700-m02
	I0528 14:08:04.346730   14040 round_trippers.go:469] Request Headers:
	I0528 14:08:04.346730   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:08:04.346730   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:08:04.351310   14040 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 14:08:04.353347   14040 pod_ready.go:92] pod "kube-controller-manager-ha-941700-m02" in "kube-system" namespace has status "Ready":"True"
	I0528 14:08:04.353347   14040 pod_ready.go:81] duration metric: took 1.5941501s for pod "kube-controller-manager-ha-941700-m02" in "kube-system" namespace to be "Ready" ...
	I0528 14:08:04.353430   14040 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hrwn5" in "kube-system" namespace to be "Ready" ...
	I0528 14:08:04.548044   14040 request.go:629] Waited for 194.5456ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hrwn5
	I0528 14:08:04.548317   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hrwn5
	I0528 14:08:04.548317   14040 round_trippers.go:469] Request Headers:
	I0528 14:08:04.548317   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:08:04.548317   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:08:04.553209   14040 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 14:08:04.750320   14040 request.go:629] Waited for 195.3397ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.216.124:8443/api/v1/nodes/ha-941700
	I0528 14:08:04.750480   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700
	I0528 14:08:04.750480   14040 round_trippers.go:469] Request Headers:
	I0528 14:08:04.750480   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:08:04.750480   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:08:04.756315   14040 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0528 14:08:04.757544   14040 pod_ready.go:92] pod "kube-proxy-hrwn5" in "kube-system" namespace has status "Ready":"True"
	I0528 14:08:04.757637   14040 pod_ready.go:81] duration metric: took 404.2056ms for pod "kube-proxy-hrwn5" in "kube-system" namespace to be "Ready" ...
	I0528 14:08:04.757637   14040 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-spzwh" in "kube-system" namespace to be "Ready" ...
	I0528 14:08:04.952876   14040 request.go:629] Waited for 194.9362ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods/kube-proxy-spzwh
	I0528 14:08:04.952876   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods/kube-proxy-spzwh
	I0528 14:08:04.953154   14040 round_trippers.go:469] Request Headers:
	I0528 14:08:04.953154   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:08:04.953154   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:08:04.958350   14040 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0528 14:08:05.141290   14040 request.go:629] Waited for 180.98ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.216.124:8443/api/v1/nodes/ha-941700-m02
	I0528 14:08:05.141459   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700-m02
	I0528 14:08:05.141513   14040 round_trippers.go:469] Request Headers:
	I0528 14:08:05.141531   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:08:05.141531   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:08:05.147119   14040 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0528 14:08:05.147842   14040 pod_ready.go:92] pod "kube-proxy-spzwh" in "kube-system" namespace has status "Ready":"True"
	I0528 14:08:05.147842   14040 pod_ready.go:81] duration metric: took 390.2043ms for pod "kube-proxy-spzwh" in "kube-system" namespace to be "Ready" ...
	I0528 14:08:05.147842   14040 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-941700" in "kube-system" namespace to be "Ready" ...
	I0528 14:08:05.344273   14040 request.go:629] Waited for 196.252ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-941700
	I0528 14:08:05.344962   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-941700
	I0528 14:08:05.345115   14040 round_trippers.go:469] Request Headers:
	I0528 14:08:05.345115   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:08:05.345473   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:08:05.350567   14040 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0528 14:08:05.547560   14040 request.go:629] Waited for 196.2273ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.216.124:8443/api/v1/nodes/ha-941700
	I0528 14:08:05.547867   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700
	I0528 14:08:05.547867   14040 round_trippers.go:469] Request Headers:
	I0528 14:08:05.547867   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:08:05.547867   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:08:05.557084   14040 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0528 14:08:05.558505   14040 pod_ready.go:92] pod "kube-scheduler-ha-941700" in "kube-system" namespace has status "Ready":"True"
	I0528 14:08:05.558505   14040 pod_ready.go:81] duration metric: took 410.662ms for pod "kube-scheduler-ha-941700" in "kube-system" namespace to be "Ready" ...
	I0528 14:08:05.558573   14040 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-941700-m02" in "kube-system" namespace to be "Ready" ...
	I0528 14:08:05.754064   14040 request.go:629] Waited for 195.1988ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-941700-m02
	I0528 14:08:05.754284   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-941700-m02
	I0528 14:08:05.754284   14040 round_trippers.go:469] Request Headers:
	I0528 14:08:05.754357   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:08:05.754389   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:08:05.759787   14040 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0528 14:08:05.944025   14040 request.go:629] Waited for 183.367ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.216.124:8443/api/v1/nodes/ha-941700-m02
	I0528 14:08:05.944227   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700-m02
	I0528 14:08:05.944227   14040 round_trippers.go:469] Request Headers:
	I0528 14:08:05.944227   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:08:05.944227   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:08:05.948822   14040 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 14:08:05.949974   14040 pod_ready.go:92] pod "kube-scheduler-ha-941700-m02" in "kube-system" namespace has status "Ready":"True"
	I0528 14:08:05.949974   14040 pod_ready.go:81] duration metric: took 391.4009ms for pod "kube-scheduler-ha-941700-m02" in "kube-system" namespace to be "Ready" ...
	I0528 14:08:05.949974   14040 pod_ready.go:38] duration metric: took 8.4025639s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0528 14:08:05.950156   14040 api_server.go:52] waiting for apiserver process to appear ...
	I0528 14:08:05.965510   14040 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 14:08:05.998177   14040 api_server.go:72] duration metric: took 15.4569891s to wait for apiserver process to appear ...
	I0528 14:08:05.998277   14040 api_server.go:88] waiting for apiserver healthz status ...
	I0528 14:08:05.998373   14040 api_server.go:253] Checking apiserver healthz at https://172.18.216.124:8443/healthz ...
	I0528 14:08:06.005428   14040 api_server.go:279] https://172.18.216.124:8443/healthz returned 200:
	ok
	I0528 14:08:06.005428   14040 round_trippers.go:463] GET https://172.18.216.124:8443/version
	I0528 14:08:06.005428   14040 round_trippers.go:469] Request Headers:
	I0528 14:08:06.005428   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:08:06.005428   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:08:06.007415   14040 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0528 14:08:06.007415   14040 api_server.go:141] control plane version: v1.30.1
	I0528 14:08:06.007415   14040 api_server.go:131] duration metric: took 9.138ms to wait for apiserver health ...
	I0528 14:08:06.007415   14040 system_pods.go:43] waiting for kube-system pods to appear ...
	I0528 14:08:06.149594   14040 request.go:629] Waited for 141.9687ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods
	I0528 14:08:06.149750   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods
	I0528 14:08:06.149750   14040 round_trippers.go:469] Request Headers:
	I0528 14:08:06.149750   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:08:06.149750   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:08:06.160589   14040 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0528 14:08:06.166571   14040 system_pods.go:59] 17 kube-system pods found
	I0528 14:08:06.166571   14040 system_pods.go:61] "coredns-7db6d8ff4d-8prdw" [4736eb43-2c5f-4edc-8070-bd2f4cb03c25] Running
	I0528 14:08:06.166571   14040 system_pods.go:61] "coredns-7db6d8ff4d-kmzqb" [347c74c7-9b8d-4b7b-976e-4723c0a1c7f6] Running
	I0528 14:08:06.166571   14040 system_pods.go:61] "etcd-ha-941700" [4fa0a90e-5c48-4186-858f-b1dfd9eaa2e8] Running
	I0528 14:08:06.166571   14040 system_pods.go:61] "etcd-ha-941700-m02" [69dfe795-b2cd-49c9-bb3c-47e6d46c2411] Running
	I0528 14:08:06.166571   14040 system_pods.go:61] "kindnet-whc47" [82a221e0-726a-4f37-8f25-1a3f674f0dd0] Running
	I0528 14:08:06.166571   14040 system_pods.go:61] "kindnet-xrp24" [8f9e51b9-76c6-4a4b-a13e-9dce573292e4] Running
	I0528 14:08:06.166571   14040 system_pods.go:61] "kube-apiserver-ha-941700" [39e10d5a-cc1a-44bd-9737-49657f744c72] Running
	I0528 14:08:06.166571   14040 system_pods.go:61] "kube-apiserver-ha-941700-m02" [1a121e31-902b-4f9a-8ab7-b0318bb3b710] Running
	I0528 14:08:06.166571   14040 system_pods.go:61] "kube-controller-manager-ha-941700" [7bd528f3-312d-4e0c-8d7d-ad91436d1fe1] Running
	I0528 14:08:06.166571   14040 system_pods.go:61] "kube-controller-manager-ha-941700-m02" [8d22254a-beac-4bcf-bee3-7bc9b316fee5] Running
	I0528 14:08:06.166571   14040 system_pods.go:61] "kube-proxy-hrwn5" [b8766939-5796-4541-b98e-7ebd6f67cd88] Running
	I0528 14:08:06.166571   14040 system_pods.go:61] "kube-proxy-spzwh" [00625372-7eda-456b-8444-2c85da29285c] Running
	I0528 14:08:06.166571   14040 system_pods.go:61] "kube-scheduler-ha-941700" [7069782c-919c-4bf2-9c1b-2b5c0c88bbd6] Running
	I0528 14:08:06.166571   14040 system_pods.go:61] "kube-scheduler-ha-941700-m02" [a4bdc26c-1a2a-4ffe-ba5f-8214c9fa7de4] Running
	I0528 14:08:06.167584   14040 system_pods.go:61] "kube-vip-ha-941700" [f3ae818c-50ad-43ac-9b57-e335c92bf829] Running
	I0528 14:08:06.167584   14040 system_pods.go:61] "kube-vip-ha-941700-m02" [3881b05b-4814-4dbc-a948-6de440748ba6] Running
	I0528 14:08:06.167584   14040 system_pods.go:61] "storage-provisioner" [b958202b-ad05-4846-b47c-92c22aa98e03] Running
	I0528 14:08:06.167584   14040 system_pods.go:74] duration metric: took 160.1685ms to wait for pod list to return data ...
	I0528 14:08:06.167584   14040 default_sa.go:34] waiting for default service account to be created ...
	I0528 14:08:06.354053   14040 request.go:629] Waited for 186.1466ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.216.124:8443/api/v1/namespaces/default/serviceaccounts
	I0528 14:08:06.354230   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/namespaces/default/serviceaccounts
	I0528 14:08:06.354302   14040 round_trippers.go:469] Request Headers:
	I0528 14:08:06.354326   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:08:06.354326   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:08:06.359086   14040 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 14:08:06.360502   14040 default_sa.go:45] found service account: "default"
	I0528 14:08:06.360502   14040 default_sa.go:55] duration metric: took 192.9175ms for default service account to be created ...
	I0528 14:08:06.360502   14040 system_pods.go:116] waiting for k8s-apps to be running ...
	I0528 14:08:06.541490   14040 request.go:629] Waited for 180.8815ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods
	I0528 14:08:06.541795   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods
	I0528 14:08:06.541795   14040 round_trippers.go:469] Request Headers:
	I0528 14:08:06.541795   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:08:06.541795   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:08:06.552348   14040 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0528 14:08:06.560031   14040 system_pods.go:86] 17 kube-system pods found
	I0528 14:08:06.560080   14040 system_pods.go:89] "coredns-7db6d8ff4d-8prdw" [4736eb43-2c5f-4edc-8070-bd2f4cb03c25] Running
	I0528 14:08:06.560156   14040 system_pods.go:89] "coredns-7db6d8ff4d-kmzqb" [347c74c7-9b8d-4b7b-976e-4723c0a1c7f6] Running
	I0528 14:08:06.560156   14040 system_pods.go:89] "etcd-ha-941700" [4fa0a90e-5c48-4186-858f-b1dfd9eaa2e8] Running
	I0528 14:08:06.560156   14040 system_pods.go:89] "etcd-ha-941700-m02" [69dfe795-b2cd-49c9-bb3c-47e6d46c2411] Running
	I0528 14:08:06.560156   14040 system_pods.go:89] "kindnet-whc47" [82a221e0-726a-4f37-8f25-1a3f674f0dd0] Running
	I0528 14:08:06.560156   14040 system_pods.go:89] "kindnet-xrp24" [8f9e51b9-76c6-4a4b-a13e-9dce573292e4] Running
	I0528 14:08:06.560156   14040 system_pods.go:89] "kube-apiserver-ha-941700" [39e10d5a-cc1a-44bd-9737-49657f744c72] Running
	I0528 14:08:06.560156   14040 system_pods.go:89] "kube-apiserver-ha-941700-m02" [1a121e31-902b-4f9a-8ab7-b0318bb3b710] Running
	I0528 14:08:06.560205   14040 system_pods.go:89] "kube-controller-manager-ha-941700" [7bd528f3-312d-4e0c-8d7d-ad91436d1fe1] Running
	I0528 14:08:06.560273   14040 system_pods.go:89] "kube-controller-manager-ha-941700-m02" [8d22254a-beac-4bcf-bee3-7bc9b316fee5] Running
	I0528 14:08:06.560337   14040 system_pods.go:89] "kube-proxy-hrwn5" [b8766939-5796-4541-b98e-7ebd6f67cd88] Running
	I0528 14:08:06.560391   14040 system_pods.go:89] "kube-proxy-spzwh" [00625372-7eda-456b-8444-2c85da29285c] Running
	I0528 14:08:06.560391   14040 system_pods.go:89] "kube-scheduler-ha-941700" [7069782c-919c-4bf2-9c1b-2b5c0c88bbd6] Running
	I0528 14:08:06.560391   14040 system_pods.go:89] "kube-scheduler-ha-941700-m02" [a4bdc26c-1a2a-4ffe-ba5f-8214c9fa7de4] Running
	I0528 14:08:06.560391   14040 system_pods.go:89] "kube-vip-ha-941700" [f3ae818c-50ad-43ac-9b57-e335c92bf829] Running
	I0528 14:08:06.560450   14040 system_pods.go:89] "kube-vip-ha-941700-m02" [3881b05b-4814-4dbc-a948-6de440748ba6] Running
	I0528 14:08:06.560450   14040 system_pods.go:89] "storage-provisioner" [b958202b-ad05-4846-b47c-92c22aa98e03] Running
	I0528 14:08:06.560450   14040 system_pods.go:126] duration metric: took 199.9481ms to wait for k8s-apps to be running ...
	I0528 14:08:06.560450   14040 system_svc.go:44] waiting for kubelet service to be running ....
	I0528 14:08:06.572807   14040 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0528 14:08:06.597559   14040 system_svc.go:56] duration metric: took 37.1082ms WaitForService to wait for kubelet
	I0528 14:08:06.597621   14040 kubeadm.go:576] duration metric: took 16.0564319s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0528 14:08:06.597621   14040 node_conditions.go:102] verifying NodePressure condition ...
	I0528 14:08:06.745944   14040 request.go:629] Waited for 148.1484ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.216.124:8443/api/v1/nodes
	I0528 14:08:06.746169   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes
	I0528 14:08:06.746169   14040 round_trippers.go:469] Request Headers:
	I0528 14:08:06.746207   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:08:06.746207   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:08:06.752978   14040 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0528 14:08:06.754989   14040 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0528 14:08:06.755043   14040 node_conditions.go:123] node cpu capacity is 2
	I0528 14:08:06.755113   14040 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0528 14:08:06.755113   14040 node_conditions.go:123] node cpu capacity is 2
	I0528 14:08:06.755113   14040 node_conditions.go:105] duration metric: took 157.4915ms to run NodePressure ...
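	The NodePressure verification reads each node's capacity from GET /api/v1/nodes, which is where the ephemeral-storage and cpu figures above come from. The sketch below pulls the same two fields out of that endpoint; it assumes the caller is authorised to list nodes (a real client would authenticate with the cluster certificates) and again skips TLS verification for brevity.

package main

import (
	"crypto/tls"
	"encoding/json"
	"fmt"
	"net/http"
)

// nodeList models only the fields this check needs from /api/v1/nodes.
type nodeList struct {
	Items []struct {
		Metadata struct {
			Name string `json:"name"`
		} `json:"metadata"`
		Status struct {
			Capacity map[string]string `json:"capacity"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Get("https://172.18.216.124:8443/api/v1/nodes")
	if err != nil {
		fmt.Println(err)
		return
	}
	defer resp.Body.Close()

	var nodes nodeList
	if err := json.NewDecoder(resp.Body).Decode(&nodes); err != nil {
		fmt.Println(err)
		return
	}
	for _, n := range nodes.Items {
		// The log prints these same two values per node.
		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n",
			n.Metadata.Name, n.Status.Capacity["ephemeral-storage"], n.Status.Capacity["cpu"])
	}
}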
	I0528 14:08:06.755113   14040 start.go:240] waiting for startup goroutines ...
	I0528 14:08:06.755169   14040 start.go:254] writing updated cluster config ...
	I0528 14:08:06.758155   14040 out.go:177] 
	I0528 14:08:06.774211   14040 config.go:182] Loaded profile config "ha-941700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0528 14:08:06.774211   14040 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\config.json ...
	I0528 14:08:06.780249   14040 out.go:177] * Starting "ha-941700-m03" control-plane node in "ha-941700" cluster
	I0528 14:08:06.783853   14040 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0528 14:08:06.783853   14040 cache.go:56] Caching tarball of preloaded images
	I0528 14:08:06.783853   14040 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0528 14:08:06.783853   14040 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0528 14:08:06.783853   14040 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\config.json ...
	I0528 14:08:06.790874   14040 start.go:360] acquireMachinesLock for ha-941700-m03: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0528 14:08:06.790874   14040 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-941700-m03"
	I0528 14:08:06.790874   14040 start.go:93] Provisioning new machine with config: &{Name:ha-941700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuberne
tesVersion:v1.30.1 ClusterName:ha-941700 Namespace:default APIServerHAVIP:172.18.223.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.18.216.124 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.18.211.44 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false
ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountU
ID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0528 14:08:06.791862   14040 start.go:125] createHost starting for "m03" (driver="hyperv")
	I0528 14:08:06.795867   14040 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0528 14:08:06.795867   14040 start.go:159] libmachine.API.Create for "ha-941700" (driver="hyperv")
	I0528 14:08:06.795867   14040 client.go:168] LocalClient.Create starting
	I0528 14:08:06.795867   14040 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0528 14:08:06.796849   14040 main.go:141] libmachine: Decoding PEM data...
	I0528 14:08:06.796849   14040 main.go:141] libmachine: Parsing certificate...
	I0528 14:08:06.796849   14040 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0528 14:08:06.796849   14040 main.go:141] libmachine: Decoding PEM data...
	I0528 14:08:06.796849   14040 main.go:141] libmachine: Parsing certificate...
	I0528 14:08:06.796849   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0528 14:08:08.775934   14040 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0528 14:08:08.775934   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:08:08.776075   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0528 14:08:10.558059   14040 main.go:141] libmachine: [stdout =====>] : False
	
	I0528 14:08:10.558126   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:08:10.558189   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0528 14:08:12.083225   14040 main.go:141] libmachine: [stdout =====>] : True
	
	I0528 14:08:12.083850   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:08:12.083986   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0528 14:08:15.968955   14040 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0528 14:08:15.968955   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:08:15.972532   14040 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1716398070-18934-amd64.iso...
	I0528 14:08:16.422039   14040 main.go:141] libmachine: Creating SSH key...
	I0528 14:08:16.557645   14040 main.go:141] libmachine: Creating VM...
	I0528 14:08:16.557645   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0528 14:08:19.572420   14040 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0528 14:08:19.572420   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:08:19.572420   14040 main.go:141] libmachine: Using switch "Default Switch"
	I0528 14:08:19.572420   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0528 14:08:21.373627   14040 main.go:141] libmachine: [stdout =====>] : True
	
	I0528 14:08:21.373627   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:08:21.373718   14040 main.go:141] libmachine: Creating VHD
	I0528 14:08:21.374197   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-941700-m03\fixed.vhd' -SizeBytes 10MB -Fixed
	I0528 14:08:25.281700   14040 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-941700-m03\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 4D7299B5-C95C-4A8E-BF76-6C2E26C8B9BA
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0528 14:08:25.281946   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:08:25.281946   14040 main.go:141] libmachine: Writing magic tar header
	I0528 14:08:25.282031   14040 main.go:141] libmachine: Writing SSH key tar header
	I0528 14:08:25.291039   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-941700-m03\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-941700-m03\disk.vhd' -VHDType Dynamic -DeleteSource
	I0528 14:08:28.562534   14040 main.go:141] libmachine: [stdout =====>] : 
	I0528 14:08:28.562986   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:08:28.563050   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-941700-m03\disk.vhd' -SizeBytes 20000MB
	I0528 14:08:31.186150   14040 main.go:141] libmachine: [stdout =====>] : 
	I0528 14:08:31.186150   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:08:31.186150   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-941700-m03 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-941700-m03' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0528 14:08:34.925523   14040 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-941700-m03 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0528 14:08:34.925523   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:08:34.925966   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-941700-m03 -DynamicMemoryEnabled $false
	I0528 14:08:37.292940   14040 main.go:141] libmachine: [stdout =====>] : 
	I0528 14:08:37.292940   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:08:37.293710   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-941700-m03 -Count 2
	I0528 14:08:39.566466   14040 main.go:141] libmachine: [stdout =====>] : 
	I0528 14:08:39.566466   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:08:39.566597   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-941700-m03 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-941700-m03\boot2docker.iso'
	I0528 14:08:42.240333   14040 main.go:141] libmachine: [stdout =====>] : 
	I0528 14:08:42.240333   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:08:42.240333   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-941700-m03 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-941700-m03\disk.vhd'
	I0528 14:08:45.025713   14040 main.go:141] libmachine: [stdout =====>] : 
	I0528 14:08:45.026533   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:08:45.026533   14040 main.go:141] libmachine: Starting VM...
	I0528 14:08:45.026533   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-941700-m03
	I0528 14:08:48.273400   14040 main.go:141] libmachine: [stdout =====>] : 
	I0528 14:08:48.273400   14040 main.go:141] libmachine: [stderr =====>] : 
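	Each [executing ==>] line is a separate powershell.exe invocation: the VM is assembled by running the Hyper-V cmdlets one after another and then started. A condensed Go sketch of that sequence is below, using only the names, sizes and paths that appear in the log; error handling is minimal and the earlier VHD preparation steps are omitted.

package main

import (
	"fmt"
	"os/exec"
)

// runPS executes one PowerShell command the way the Hyper-V driver does:
// a fresh non-interactive powershell.exe process per cmdlet.
func runPS(command string) (string, error) {
	out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", command).CombinedOutput()
	return string(out), err
}

func main() {
	name := "ha-941700-m03"
	dir := `C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\` + name

	// Same sequence as in the log: create the VM, pin memory and CPU count,
	// attach the boot ISO and the data disk, then start it.
	steps := []string{
		fmt.Sprintf(`Hyper-V\New-VM %s -Path '%s' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB`, name, dir),
		fmt.Sprintf(`Hyper-V\Set-VMMemory -VMName %s -DynamicMemoryEnabled $false`, name),
		fmt.Sprintf(`Hyper-V\Set-VMProcessor %s -Count 2`, name),
		fmt.Sprintf(`Hyper-V\Set-VMDvdDrive -VMName %s -Path '%s\boot2docker.iso'`, name, dir),
		fmt.Sprintf(`Hyper-V\Add-VMHardDiskDrive -VMName %s -Path '%s\disk.vhd'`, name, dir),
		fmt.Sprintf(`Hyper-V\Start-VM %s`, name),
	}
	for _, s := range steps {
		if out, err := runPS(s); err != nil {
			fmt.Printf("step failed: %v\n%s\n", err, out)
			return
		}
	}
}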
	I0528 14:08:48.273400   14040 main.go:141] libmachine: Waiting for host to start...
	I0528 14:08:48.273400   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700-m03 ).state
	I0528 14:08:50.713415   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:08:50.714229   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:08:50.714308   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700-m03 ).networkadapters[0]).ipaddresses[0]
	I0528 14:08:53.374692   14040 main.go:141] libmachine: [stdout =====>] : 
	I0528 14:08:53.375370   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:08:54.388425   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700-m03 ).state
	I0528 14:08:56.713724   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:08:56.713724   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:08:56.713932   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700-m03 ).networkadapters[0]).ipaddresses[0]
	I0528 14:08:59.351788   14040 main.go:141] libmachine: [stdout =====>] : 
	I0528 14:08:59.351788   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:09:00.366165   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700-m03 ).state
	I0528 14:09:02.670953   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:09:02.670953   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:09:02.672062   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700-m03 ).networkadapters[0]).ipaddresses[0]
	I0528 14:09:05.277941   14040 main.go:141] libmachine: [stdout =====>] : 
	I0528 14:09:05.277941   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:09:06.292842   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700-m03 ).state
	I0528 14:09:08.593963   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:09:08.593963   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:09:08.594197   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700-m03 ).networkadapters[0]).ipaddresses[0]
	I0528 14:09:11.261357   14040 main.go:141] libmachine: [stdout =====>] : 
	I0528 14:09:11.262143   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:09:12.273777   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700-m03 ).state
	I0528 14:09:14.612984   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:09:14.612984   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:09:14.612984   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700-m03 ).networkadapters[0]).ipaddresses[0]
	I0528 14:09:17.320817   14040 main.go:141] libmachine: [stdout =====>] : 172.18.222.186
	
	I0528 14:09:17.320817   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:09:17.321719   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700-m03 ).state
	I0528 14:09:19.564811   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:09:19.565697   14040 main.go:141] libmachine: [stderr =====>] : 
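	Between Start-VM and the first SSH connection the driver simply re-queries the VM state and its first IPv4 address until DHCP has assigned one, which is why the same two cmdlets repeat with empty stdout until 172.18.222.186 appears. A hedged sketch of that wait loop (interval and deadline are illustrative):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// vmIP returns the first IP address Hyper-V reports for the VM, or "" if the
// guest has not obtained one yet (the empty [stdout] lines in the log).
func vmIP(name string) string {
	cmd := fmt.Sprintf(`(( Hyper-V\Get-VM %s ).networkadapters[0]).ipaddresses[0]`, name)
	out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", cmd).Output()
	if err != nil {
		return ""
	}
	return strings.TrimSpace(string(out))
}

func main() {
	// Poll roughly as the log does: a query every few seconds until an
	// address shows up, with an overall deadline.
	deadline := time.Now().Add(5 * time.Minute)
	for time.Now().Before(deadline) {
		if ip := vmIP("ha-941700-m03"); ip != "" {
			fmt.Println("VM is reachable at", ip)
			return
		}
		time.Sleep(5 * time.Second)
	}
	fmt.Println("timed out waiting for an IP address")
}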
	I0528 14:09:19.565697   14040 machine.go:94] provisionDockerMachine start ...
	I0528 14:09:19.565844   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700-m03 ).state
	I0528 14:09:21.842647   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:09:21.843075   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:09:21.843164   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700-m03 ).networkadapters[0]).ipaddresses[0]
	I0528 14:09:24.496551   14040 main.go:141] libmachine: [stdout =====>] : 172.18.222.186
	
	I0528 14:09:24.496617   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:09:24.502487   14040 main.go:141] libmachine: Using SSH client type: native
	I0528 14:09:24.513195   14040 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5ca4a0] 0x5cd080 <nil>  [] 0s} 172.18.222.186 22 <nil> <nil>}
	I0528 14:09:24.513195   14040 main.go:141] libmachine: About to run SSH command:
	hostname
	I0528 14:09:24.635228   14040 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0528 14:09:24.635286   14040 buildroot.go:166] provisioning hostname "ha-941700-m03"
	I0528 14:09:24.635344   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700-m03 ).state
	I0528 14:09:26.841642   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:09:26.841642   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:09:26.841642   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700-m03 ).networkadapters[0]).ipaddresses[0]
	I0528 14:09:29.450461   14040 main.go:141] libmachine: [stdout =====>] : 172.18.222.186
	
	I0528 14:09:29.451124   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:09:29.458297   14040 main.go:141] libmachine: Using SSH client type: native
	I0528 14:09:29.458978   14040 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5ca4a0] 0x5cd080 <nil>  [] 0s} 172.18.222.186 22 <nil> <nil>}
	I0528 14:09:29.458978   14040 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-941700-m03 && echo "ha-941700-m03" | sudo tee /etc/hostname
	I0528 14:09:29.605025   14040 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-941700-m03
	
	I0528 14:09:29.605131   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700-m03 ).state
	I0528 14:09:31.833133   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:09:31.833817   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:09:31.834068   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700-m03 ).networkadapters[0]).ipaddresses[0]
	I0528 14:09:34.511029   14040 main.go:141] libmachine: [stdout =====>] : 172.18.222.186
	
	I0528 14:09:34.511845   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:09:34.518738   14040 main.go:141] libmachine: Using SSH client type: native
	I0528 14:09:34.519494   14040 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5ca4a0] 0x5cd080 <nil>  [] 0s} 172.18.222.186 22 <nil> <nil>}
	I0528 14:09:34.519494   14040 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-941700-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-941700-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-941700-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0528 14:09:34.665631   14040 main.go:141] libmachine: SSH cmd err, output: <nil>: 
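	The SSH command above makes the /etc/hosts entry idempotent: it does nothing if a line for ha-941700-m03 already exists, rewrites an existing 127.0.1.1 line if there is one, and only otherwise appends a new entry. The Go sketch below applies the same logic to a local hosts file; it simplifies the whole-line grep match to a substring check and would need root to write the real file.

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry mirrors the shell above: if no line already mentions the
// hostname, either rewrite an existing "127.0.1.1 ..." line or append one.
func ensureHostsEntry(path, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	lines := strings.Split(string(data), "\n")
	for _, l := range lines {
		if strings.Contains(l, hostname) {
			return nil // already present, nothing to do
		}
	}
	replaced := false
	for i, l := range lines {
		if strings.HasPrefix(l, "127.0.1.1") {
			lines[i] = "127.0.1.1 " + hostname
			replaced = true
			break
		}
	}
	if !replaced {
		lines = append(lines, "127.0.1.1 "+hostname)
	}
	return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "ha-941700-m03"); err != nil {
		fmt.Println(err)
	}
}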
	I0528 14:09:34.665631   14040 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0528 14:09:34.665631   14040 buildroot.go:174] setting up certificates
	I0528 14:09:34.665631   14040 provision.go:84] configureAuth start
	I0528 14:09:34.665631   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700-m03 ).state
	I0528 14:09:36.918745   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:09:36.918745   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:09:36.918745   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700-m03 ).networkadapters[0]).ipaddresses[0]
	I0528 14:09:39.591203   14040 main.go:141] libmachine: [stdout =====>] : 172.18.222.186
	
	I0528 14:09:39.591203   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:09:39.591203   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700-m03 ).state
	I0528 14:09:41.819579   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:09:41.820409   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:09:41.820409   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700-m03 ).networkadapters[0]).ipaddresses[0]
	I0528 14:09:44.478304   14040 main.go:141] libmachine: [stdout =====>] : 172.18.222.186
	
	I0528 14:09:44.478304   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:09:44.478392   14040 provision.go:143] copyHostCerts
	I0528 14:09:44.478523   14040 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0528 14:09:44.478523   14040 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0528 14:09:44.478523   14040 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0528 14:09:44.479215   14040 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0528 14:09:44.480715   14040 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0528 14:09:44.481066   14040 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0528 14:09:44.481066   14040 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0528 14:09:44.481580   14040 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0528 14:09:44.481750   14040 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0528 14:09:44.481750   14040 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0528 14:09:44.481750   14040 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0528 14:09:44.481750   14040 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1675 bytes)
	I0528 14:09:44.484146   14040 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-941700-m03 san=[127.0.0.1 172.18.222.186 ha-941700-m03 localhost minikube]
	I0528 14:09:44.681391   14040 provision.go:177] copyRemoteCerts
	I0528 14:09:44.698055   14040 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0528 14:09:44.698055   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700-m03 ).state
	I0528 14:09:46.957204   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:09:46.957204   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:09:46.957296   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700-m03 ).networkadapters[0]).ipaddresses[0]
	I0528 14:09:49.613112   14040 main.go:141] libmachine: [stdout =====>] : 172.18.222.186
	
	I0528 14:09:49.613112   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:09:49.613112   14040 sshutil.go:53] new ssh client: &{IP:172.18.222.186 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-941700-m03\id_rsa Username:docker}
	I0528 14:09:49.708912   14040 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.0107798s)
	I0528 14:09:49.708912   14040 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0528 14:09:49.708912   14040 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0528 14:09:49.753651   14040 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0528 14:09:49.754169   14040 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0528 14:09:49.800940   14040 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0528 14:09:49.801567   14040 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0528 14:09:49.847832   14040 provision.go:87] duration metric: took 15.182169s to configureAuth
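	configureAuth generates a server certificate whose SANs cover the new node's IP and host names (the san=[...] list above) and then copies it, the CA certificate and the key to /etc/docker. The sketch below produces a certificate with the same SANs; unlike minikube, which signs server.pem with its ca.pem/ca-key.pem, it self-signs to stay short, so treat it as an illustration of the SAN handling only.

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Key and certificate template. minikube signs with its own CA; this
	// sketch self-signs to stay compact.
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-941700-m03"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the profile config
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs from the log: loopback, the new node IP, and its host names.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.18.222.186")},
		DNSNames:    []string{"ha-941700-m03", "localhost", "minikube"},
	}
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	certOut, _ := os.Create("server.pem")
	defer certOut.Close()
	pem.Encode(certOut, &pem.Block{Type: "CERTIFICATE", Bytes: der})

	keyBytes, _ := x509.MarshalECPrivateKey(key)
	keyOut, _ := os.Create("server-key.pem")
	defer keyOut.Close()
	pem.Encode(keyOut, &pem.Block{Type: "EC PRIVATE KEY", Bytes: keyBytes})
}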
	I0528 14:09:49.847832   14040 buildroot.go:189] setting minikube options for container-runtime
	I0528 14:09:49.848454   14040 config.go:182] Loaded profile config "ha-941700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0528 14:09:49.848608   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700-m03 ).state
	I0528 14:09:52.071802   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:09:52.072196   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:09:52.072196   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700-m03 ).networkadapters[0]).ipaddresses[0]
	I0528 14:09:54.737842   14040 main.go:141] libmachine: [stdout =====>] : 172.18.222.186
	
	I0528 14:09:54.738761   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:09:54.744441   14040 main.go:141] libmachine: Using SSH client type: native
	I0528 14:09:54.745110   14040 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5ca4a0] 0x5cd080 <nil>  [] 0s} 172.18.222.186 22 <nil> <nil>}
	I0528 14:09:54.745110   14040 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0528 14:09:54.871652   14040 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0528 14:09:54.871785   14040 buildroot.go:70] root file system type: tmpfs
	I0528 14:09:54.871922   14040 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0528 14:09:54.871922   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700-m03 ).state
	I0528 14:09:57.084646   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:09:57.084783   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:09:57.084917   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700-m03 ).networkadapters[0]).ipaddresses[0]
	I0528 14:09:59.699711   14040 main.go:141] libmachine: [stdout =====>] : 172.18.222.186
	
	I0528 14:09:59.699711   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:09:59.705132   14040 main.go:141] libmachine: Using SSH client type: native
	I0528 14:09:59.705852   14040 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5ca4a0] 0x5cd080 <nil>  [] 0s} 172.18.222.186 22 <nil> <nil>}
	I0528 14:09:59.705852   14040 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.18.216.124"
	Environment="NO_PROXY=172.18.216.124,172.18.211.44"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0528 14:09:59.864507   14040 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.18.216.124
	Environment=NO_PROXY=172.18.216.124,172.18.211.44
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0528 14:09:59.864624   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700-m03 ).state
	I0528 14:10:02.071722   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:10:02.072230   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:10:02.072230   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700-m03 ).networkadapters[0]).ipaddresses[0]
	I0528 14:10:04.665133   14040 main.go:141] libmachine: [stdout =====>] : 172.18.222.186
	
	I0528 14:10:04.665133   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:10:04.671844   14040 main.go:141] libmachine: Using SSH client type: native
	I0528 14:10:04.672556   14040 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5ca4a0] 0x5cd080 <nil>  [] 0s} 172.18.222.186 22 <nil> <nil>}
	I0528 14:10:04.672556   14040 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0528 14:10:06.815575   14040 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0528 14:10:06.815575   14040 machine.go:97] duration metric: took 47.2497098s to provisionDockerMachine
	I0528 14:10:06.815575   14040 client.go:171] duration metric: took 2m0.0194558s to LocalClient.Create
	I0528 14:10:06.815575   14040 start.go:167] duration metric: took 2m0.0194558s to libmachine.API.Create "ha-941700"
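	The unit file echoed back above is rendered on the client, written to /lib/systemd/system/docker.service.new over SSH, and only moved into place (followed by daemon-reload, enable and restart) when diff reports a difference, which keeps re-provisioning idempotent. A sketch of the templating side is below; the static text is abridged and only the per-node values visible in the log (the NO_PROXY entries and the hyperv provider label) are parameterised.

package main

import (
	"os"
	"text/template"
)

// Only the fields that vary per node are templated; the rest of the unit is
// the static text shown in the log above (trimmed here for brevity).
const dockerUnit = `[Unit]
Description=Docker Application Container Engine
After=network.target minikube-automount.service docker.socket
Requires=minikube-automount.service docker.socket

[Service]
Type=notify
Restart=on-failure
{{range .NoProxy}}Environment="NO_PROXY={{.}}"
{{end}}
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider={{.Provider}} --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID

[Install]
WantedBy=multi-user.target
`

func main() {
	data := struct {
		NoProxy  []string
		Provider string
	}{
		// Values from the log: each earlier control-plane IP is added as a
		// NO_PROXY entry for the new node.
		NoProxy:  []string{"172.18.216.124", "172.18.216.124,172.18.211.44"},
		Provider: "hyperv",
	}
	t := template.Must(template.New("docker.service").Parse(dockerUnit))
	if err := t.Execute(os.Stdout, data); err != nil {
		panic(err)
	}
}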
	I0528 14:10:06.815575   14040 start.go:293] postStartSetup for "ha-941700-m03" (driver="hyperv")
	I0528 14:10:06.815575   14040 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0528 14:10:06.829533   14040 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0528 14:10:06.829533   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700-m03 ).state
	I0528 14:10:09.081619   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:10:09.082083   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:10:09.082206   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700-m03 ).networkadapters[0]).ipaddresses[0]
	I0528 14:10:11.758312   14040 main.go:141] libmachine: [stdout =====>] : 172.18.222.186
	
	I0528 14:10:11.758312   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:10:11.758625   14040 sshutil.go:53] new ssh client: &{IP:172.18.222.186 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-941700-m03\id_rsa Username:docker}
	I0528 14:10:11.867169   14040 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.037506s)
	I0528 14:10:11.881141   14040 ssh_runner.go:195] Run: cat /etc/os-release
	I0528 14:10:11.888258   14040 info.go:137] Remote host: Buildroot 2023.02.9
	I0528 14:10:11.888258   14040 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0528 14:10:11.888917   14040 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0528 14:10:11.889441   14040 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\17802.pem -> 17802.pem in /etc/ssl/certs
	I0528 14:10:11.889441   14040 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\17802.pem -> /etc/ssl/certs/17802.pem
	I0528 14:10:11.905352   14040 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0528 14:10:11.931212   14040 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\17802.pem --> /etc/ssl/certs/17802.pem (1708 bytes)
	I0528 14:10:11.982506   14040 start.go:296] duration metric: took 5.1669203s for postStartSetup
	I0528 14:10:11.985169   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700-m03 ).state
	I0528 14:10:14.245028   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:10:14.245028   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:10:14.245909   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700-m03 ).networkadapters[0]).ipaddresses[0]
	I0528 14:10:16.941961   14040 main.go:141] libmachine: [stdout =====>] : 172.18.222.186
	
	I0528 14:10:16.941961   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:10:16.942493   14040 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\config.json ...
	I0528 14:10:16.944851   14040 start.go:128] duration metric: took 2m10.1527163s to createHost
	I0528 14:10:16.944851   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700-m03 ).state
	I0528 14:10:19.210392   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:10:19.210392   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:10:19.210820   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700-m03 ).networkadapters[0]).ipaddresses[0]
	I0528 14:10:21.920781   14040 main.go:141] libmachine: [stdout =====>] : 172.18.222.186
	
	I0528 14:10:21.921528   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:10:21.927844   14040 main.go:141] libmachine: Using SSH client type: native
	I0528 14:10:21.928917   14040 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5ca4a0] 0x5cd080 <nil>  [] 0s} 172.18.222.186 22 <nil> <nil>}
	I0528 14:10:21.928917   14040 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0528 14:10:22.053349   14040 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716930622.053839995
	
	I0528 14:10:22.053349   14040 fix.go:216] guest clock: 1716930622.053839995
	I0528 14:10:22.053349   14040 fix.go:229] Guest: 2024-05-28 14:10:22.053839995 -0700 PDT Remote: 2024-05-28 14:10:16.9448519 -0700 PDT m=+575.821034401 (delta=5.108988095s)
	I0528 14:10:22.053349   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700-m03 ).state
	I0528 14:10:24.278665   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:10:24.278665   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:10:24.278665   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700-m03 ).networkadapters[0]).ipaddresses[0]
	I0528 14:10:26.945126   14040 main.go:141] libmachine: [stdout =====>] : 172.18.222.186
	
	I0528 14:10:26.945126   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:10:26.950981   14040 main.go:141] libmachine: Using SSH client type: native
	I0528 14:10:26.951679   14040 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5ca4a0] 0x5cd080 <nil>  [] 0s} 172.18.222.186 22 <nil> <nil>}
	I0528 14:10:26.951679   14040 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1716930622
	I0528 14:10:27.084373   14040 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue May 28 21:10:22 UTC 2024
	
	I0528 14:10:27.084373   14040 fix.go:236] clock set: Tue May 28 21:10:22 UTC 2024
	 (err=<nil>)
	I0528 14:10:27.084373   14040 start.go:83] releasing machines lock for "ha-941700-m03", held for 2m20.2932049s
	I0528 14:10:27.084373   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700-m03 ).state
	I0528 14:10:29.365819   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:10:29.366612   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:10:29.366612   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700-m03 ).networkadapters[0]).ipaddresses[0]
	I0528 14:10:32.095892   14040 main.go:141] libmachine: [stdout =====>] : 172.18.222.186
	
	I0528 14:10:32.095973   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:10:32.101053   14040 out.go:177] * Found network options:
	I0528 14:10:32.103487   14040 out.go:177]   - NO_PROXY=172.18.216.124,172.18.211.44
	W0528 14:10:32.105577   14040 proxy.go:119] fail to check proxy env: Error ip not in block
	W0528 14:10:32.105577   14040 proxy.go:119] fail to check proxy env: Error ip not in block
	I0528 14:10:32.108126   14040 out.go:177]   - NO_PROXY=172.18.216.124,172.18.211.44
	W0528 14:10:32.110565   14040 proxy.go:119] fail to check proxy env: Error ip not in block
	W0528 14:10:32.110565   14040 proxy.go:119] fail to check proxy env: Error ip not in block
	W0528 14:10:32.111945   14040 proxy.go:119] fail to check proxy env: Error ip not in block
	W0528 14:10:32.111945   14040 proxy.go:119] fail to check proxy env: Error ip not in block
	I0528 14:10:32.113891   14040 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0528 14:10:32.113891   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700-m03 ).state
	I0528 14:10:32.127132   14040 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0528 14:10:32.127396   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700-m03 ).state
	I0528 14:10:34.444629   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:10:34.444629   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:10:34.444629   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700-m03 ).networkadapters[0]).ipaddresses[0]
	I0528 14:10:34.450254   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:10:34.451140   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:10:34.451140   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700-m03 ).networkadapters[0]).ipaddresses[0]
	I0528 14:10:37.225752   14040 main.go:141] libmachine: [stdout =====>] : 172.18.222.186
	
	I0528 14:10:37.225752   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:10:37.226316   14040 sshutil.go:53] new ssh client: &{IP:172.18.222.186 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-941700-m03\id_rsa Username:docker}
	I0528 14:10:37.258268   14040 main.go:141] libmachine: [stdout =====>] : 172.18.222.186
	
	I0528 14:10:37.258730   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:10:37.258943   14040 sshutil.go:53] new ssh client: &{IP:172.18.222.186 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-941700-m03\id_rsa Username:docker}
	I0528 14:10:37.409354   14040 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.2819474s)
	I0528 14:10:37.410260   14040 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.2963579s)
	W0528 14:10:37.410260   14040 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0528 14:10:37.422277   14040 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0528 14:10:37.457653   14040 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0528 14:10:37.457653   14040 start.go:494] detecting cgroup driver to use...
	I0528 14:10:37.457929   14040 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0528 14:10:37.511284   14040 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0528 14:10:37.543988   14040 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0528 14:10:37.566052   14040 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0528 14:10:37.578646   14040 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0528 14:10:37.611915   14040 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0528 14:10:37.647371   14040 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0528 14:10:37.684800   14040 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0528 14:10:37.719833   14040 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0528 14:10:37.754848   14040 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0528 14:10:37.795163   14040 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0528 14:10:37.833885   14040 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0528 14:10:37.866529   14040 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0528 14:10:37.897978   14040 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0528 14:10:37.927248   14040 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 14:10:38.139122   14040 ssh_runner.go:195] Run: sudo systemctl restart containerd
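Note: the run of sed commands above rewrites /etc/containerd/config.toml so that runc uses cgroupfs (SystemdCgroup = false), points conf_dir at /etc/cni/net.d, and then restarts containerd. A rough stand-alone Go equivalent of just the SystemdCgroup edit, assumed from the sed expression in the log rather than taken from minikube's source:

package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/containerd/config.toml"

	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}

	// Equivalent of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	updated := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))

	if err := os.WriteFile(path, updated, 0644); err != nil {
		log.Fatal(err)
	}
	// containerd still needs a restart (systemctl restart containerd) to pick this up.
}

The subsequent systemctl checks then stop containerd again, because this cluster actually runs Docker with cri-dockerd as its runtime (see the cri-dockerd setup that follows).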
	I0528 14:10:38.175619   14040 start.go:494] detecting cgroup driver to use...
	I0528 14:10:38.189425   14040 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0528 14:10:38.228397   14040 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0528 14:10:38.265393   14040 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0528 14:10:38.329444   14040 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0528 14:10:38.375834   14040 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0528 14:10:38.414609   14040 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0528 14:10:38.483289   14040 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0528 14:10:38.510385   14040 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0528 14:10:38.558266   14040 ssh_runner.go:195] Run: which cri-dockerd
	I0528 14:10:38.578103   14040 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0528 14:10:38.597128   14040 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0528 14:10:38.639782   14040 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0528 14:10:38.846423   14040 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0528 14:10:39.030530   14040 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0528 14:10:39.030651   14040 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0528 14:10:39.083703   14040 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 14:10:39.285178   14040 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0528 14:10:41.823618   14040 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5384347s)
	I0528 14:10:41.836324   14040 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0528 14:10:41.875517   14040 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0528 14:10:41.911538   14040 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0528 14:10:42.144709   14040 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0528 14:10:42.372858   14040 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 14:10:42.579693   14040 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0528 14:10:42.623054   14040 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0528 14:10:42.660193   14040 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 14:10:42.870380   14040 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0528 14:10:42.987952   14040 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0528 14:10:43.002467   14040 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0528 14:10:43.012232   14040 start.go:562] Will wait 60s for crictl version
	I0528 14:10:43.024142   14040 ssh_runner.go:195] Run: which crictl
	I0528 14:10:43.043222   14040 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0528 14:10:43.108797   14040 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0528 14:10:43.120059   14040 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0528 14:10:43.163713   14040 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0528 14:10:43.203812   14040 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.0.2 ...
	I0528 14:10:43.206241   14040 out.go:177]   - env NO_PROXY=172.18.216.124
	I0528 14:10:43.209076   14040 out.go:177]   - env NO_PROXY=172.18.216.124,172.18.211.44
	I0528 14:10:43.210715   14040 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0528 14:10:43.216637   14040 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0528 14:10:43.216756   14040 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0528 14:10:43.216756   14040 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0528 14:10:43.216827   14040 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:91:32:cc Flags:up|broadcast|multicast|running}
	I0528 14:10:43.220198   14040 ip.go:210] interface addr: fe80::6a65:d82d:f435:d71f/64
	I0528 14:10:43.220198   14040 ip.go:210] interface addr: 172.18.208.1/20
	I0528 14:10:43.234025   14040 ssh_runner.go:195] Run: grep 172.18.208.1	host.minikube.internal$ /etc/hosts
	I0528 14:10:43.240713   14040 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.18.208.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0528 14:10:43.270268   14040 mustload.go:65] Loading cluster: ha-941700
	I0528 14:10:43.270784   14040 config.go:182] Loaded profile config "ha-941700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0528 14:10:43.271676   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700 ).state
	I0528 14:10:45.480601   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:10:45.480728   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:10:45.480728   14040 host.go:66] Checking if "ha-941700" exists ...
	I0528 14:10:45.481418   14040 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700 for IP: 172.18.222.186
	I0528 14:10:45.481418   14040 certs.go:194] generating shared ca certs ...
	I0528 14:10:45.481418   14040 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 14:10:45.482077   14040 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0528 14:10:45.482077   14040 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0528 14:10:45.482077   14040 certs.go:256] generating profile certs ...
	I0528 14:10:45.482838   14040 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\client.key
	I0528 14:10:45.483388   14040 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\apiserver.key.e1a03d42
	I0528 14:10:45.483660   14040 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\apiserver.crt.e1a03d42 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.18.216.124 172.18.211.44 172.18.222.186 172.18.223.254]
	I0528 14:10:45.575472   14040 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\apiserver.crt.e1a03d42 ...
	I0528 14:10:45.575472   14040 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\apiserver.crt.e1a03d42: {Name:mk80dd9cb1fecbbe16ab6061edd6aa6ccaf0f3dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 14:10:45.577535   14040 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\apiserver.key.e1a03d42 ...
	I0528 14:10:45.577535   14040 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\apiserver.key.e1a03d42: {Name:mk3053e4efc73b34b3dee884f9a337bd870325c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 14:10:45.578147   14040 certs.go:381] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\apiserver.crt.e1a03d42 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\apiserver.crt
	I0528 14:10:45.598789   14040 certs.go:385] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\apiserver.key.e1a03d42 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\apiserver.key
	I0528 14:10:45.600311   14040 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\proxy-client.key
	I0528 14:10:45.600311   14040 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0528 14:10:45.600489   14040 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0528 14:10:45.600489   14040 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0528 14:10:45.600489   14040 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0528 14:10:45.600489   14040 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0528 14:10:45.601299   14040 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0528 14:10:45.601485   14040 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0528 14:10:45.601731   14040 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0528 14:10:45.602541   14040 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\1780.pem (1338 bytes)
	W0528 14:10:45.602601   14040 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\1780_empty.pem, impossibly tiny 0 bytes
	I0528 14:10:45.602601   14040 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0528 14:10:45.602601   14040 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0528 14:10:45.603453   14040 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0528 14:10:45.603453   14040 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0528 14:10:45.604359   14040 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\17802.pem (1708 bytes)
	I0528 14:10:45.604668   14040 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0528 14:10:45.604843   14040 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\1780.pem -> /usr/share/ca-certificates/1780.pem
	I0528 14:10:45.604843   14040 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\17802.pem -> /usr/share/ca-certificates/17802.pem
	I0528 14:10:45.604843   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700 ).state
	I0528 14:10:47.913881   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:10:47.913881   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:10:47.913970   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700 ).networkadapters[0]).ipaddresses[0]
	I0528 14:10:50.657939   14040 main.go:141] libmachine: [stdout =====>] : 172.18.216.124
	
	I0528 14:10:50.657939   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:10:50.659271   14040 sshutil.go:53] new ssh client: &{IP:172.18.216.124 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-941700\id_rsa Username:docker}
	I0528 14:10:50.767503   14040 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0528 14:10:50.775661   14040 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0528 14:10:50.815145   14040 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0528 14:10:50.822247   14040 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0528 14:10:50.858198   14040 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0528 14:10:50.865802   14040 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0528 14:10:50.910123   14040 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0528 14:10:50.918502   14040 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0528 14:10:50.955979   14040 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0528 14:10:50.963263   14040 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0528 14:10:50.998151   14040 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0528 14:10:51.007586   14040 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0528 14:10:51.033245   14040 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0528 14:10:51.085291   14040 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0528 14:10:51.140192   14040 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0528 14:10:51.197706   14040 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0528 14:10:51.249704   14040 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0528 14:10:51.297669   14040 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0528 14:10:51.345931   14040 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0528 14:10:51.394164   14040 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0528 14:10:51.442278   14040 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0528 14:10:51.491703   14040 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\1780.pem --> /usr/share/ca-certificates/1780.pem (1338 bytes)
	I0528 14:10:51.540828   14040 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\17802.pem --> /usr/share/ca-certificates/17802.pem (1708 bytes)
	I0528 14:10:51.588409   14040 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0528 14:10:51.620626   14040 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0528 14:10:51.652109   14040 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0528 14:10:51.684053   14040 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0528 14:10:51.716676   14040 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0528 14:10:51.748953   14040 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0528 14:10:51.780608   14040 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0528 14:10:51.827385   14040 ssh_runner.go:195] Run: openssl version
	I0528 14:10:51.849803   14040 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1780.pem && ln -fs /usr/share/ca-certificates/1780.pem /etc/ssl/certs/1780.pem"
	I0528 14:10:51.885467   14040 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1780.pem
	I0528 14:10:51.892711   14040 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 28 20:41 /usr/share/ca-certificates/1780.pem
	I0528 14:10:51.905580   14040 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1780.pem
	I0528 14:10:51.928874   14040 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1780.pem /etc/ssl/certs/51391683.0"
	I0528 14:10:51.963888   14040 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17802.pem && ln -fs /usr/share/ca-certificates/17802.pem /etc/ssl/certs/17802.pem"
	I0528 14:10:51.997510   14040 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17802.pem
	I0528 14:10:52.007360   14040 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 28 20:41 /usr/share/ca-certificates/17802.pem
	I0528 14:10:52.021635   14040 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17802.pem
	I0528 14:10:52.044815   14040 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/17802.pem /etc/ssl/certs/3ec20f2e.0"
	I0528 14:10:52.080016   14040 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0528 14:10:52.115599   14040 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0528 14:10:52.122722   14040 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 28 20:25 /usr/share/ca-certificates/minikubeCA.pem
	I0528 14:10:52.134614   14040 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0528 14:10:52.158327   14040 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0528 14:10:52.194034   14040 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0528 14:10:52.202404   14040 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0528 14:10:52.202404   14040 kubeadm.go:928] updating node {m03 172.18.222.186 8443 v1.30.1 docker true true} ...
	I0528 14:10:52.202404   14040 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-941700-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.18.222.186
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-941700 Namespace:default APIServerHAVIP:172.18.223.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0528 14:10:52.203001   14040 kube-vip.go:115] generating kube-vip config ...
	I0528 14:10:52.216049   14040 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0528 14:10:52.249679   14040 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0528 14:10:52.249832   14040 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.18.223.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
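Note: the kube-vip config above is a static pod manifest with the control-plane VIP (172.18.223.254), the API server port, and the image version injected as values; the rendered bytes are later copied to /etc/kubernetes/manifests/kube-vip.yaml (see the scp line further down). A hypothetical sketch of generating a manifest like this with Go's text/template, trimmed to a few of the fields shown and not minikube's actual template:

package main

import (
	"os"
	"text/template"
)

// Only a handful of fields from the manifest above, for illustration.
const manifestTmpl = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - args:
    - manager
    env:
    - name: port
      value: "{{ .Port }}"
    - name: address
      value: {{ .VIP }}
    image: {{ .Image }}
    name: kube-vip
  hostNetwork: true
`

func main() {
	params := struct {
		VIP   string
		Port  int
		Image string
	}{
		VIP:   "172.18.223.254",
		Port:  8443,
		Image: "ghcr.io/kube-vip/kube-vip:v0.8.0",
	}

	tmpl := template.Must(template.New("kube-vip").Parse(manifestTmpl))
	// Render to stdout; in the log the rendered bytes are scp'd to the node instead.
	if err := tmpl.Execute(os.Stdout, params); err != nil {
		panic(err)
	}
}

Dropping the rendered file into /etc/kubernetes/manifests lets the kubelet start kube-vip as a static pod before this node's API server is reachable.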
	I0528 14:10:52.261884   14040 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0528 14:10:52.281163   14040 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.1': No such file or directory
	
	Initiating transfer...
	I0528 14:10:52.298176   14040 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.1
	I0528 14:10:52.317249   14040 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl.sha256
	I0528 14:10:52.317249   14040 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet.sha256
	I0528 14:10:52.317249   14040 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm.sha256
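Note: each of the three URLs above pairs a binary with its published .sha256 file, so the transfer can be verified before the binaries land in /var/lib/minikube/binaries/v1.30.1. A self-contained sketch of that download-and-verify step for kubectl (the fetch helper is illustrative; minikube's own downloader additionally handles caching, retries, and progress):

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"strings"
)

// fetch returns the body of a URL, or an error for non-200 responses.
func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	return io.ReadAll(resp.Body)
}

func main() {
	base := "https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl"

	bin, err := fetch(base)
	if err != nil {
		panic(err)
	}
	sumFile, err := fetch(base + ".sha256")
	if err != nil {
		panic(err)
	}

	// The checksum file holds the hex digest (possibly followed by a filename).
	want := strings.Fields(string(sumFile))[0]
	sum := sha256.Sum256(bin)
	got := hex.EncodeToString(sum[:])

	if got != want {
		panic(fmt.Sprintf("checksum mismatch: got %s want %s", got, want))
	}
	fmt.Println("kubectl verified:", got)
}

In the log the existence check (stat) fails for all three binaries on the fresh m03 node, so roughly 200 MB of kubeadm, kubectl, and kubelet are pushed over SSH.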
	I0528 14:10:52.317249   14040 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubectl -> /var/lib/minikube/binaries/v1.30.1/kubectl
	I0528 14:10:52.317249   14040 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubeadm -> /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0528 14:10:52.333951   14040 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0528 14:10:52.333951   14040 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubectl
	I0528 14:10:52.335828   14040 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0528 14:10:52.341481   14040 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubeadm': No such file or directory
	I0528 14:10:52.341481   14040 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubeadm --> /var/lib/minikube/binaries/v1.30.1/kubeadm (50249880 bytes)
	I0528 14:10:52.342734   14040 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubectl': No such file or directory
	I0528 14:10:52.342734   14040 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubectl --> /var/lib/minikube/binaries/v1.30.1/kubectl (51454104 bytes)
	I0528 14:10:52.376727   14040 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubelet -> /var/lib/minikube/binaries/v1.30.1/kubelet
	I0528 14:10:52.391477   14040 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubelet
	I0528 14:10:52.459721   14040 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubelet': No such file or directory
	I0528 14:10:52.460024   14040 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubelet --> /var/lib/minikube/binaries/v1.30.1/kubelet (100100024 bytes)
	I0528 14:10:53.670762   14040 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0528 14:10:53.694732   14040 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0528 14:10:53.732373   14040 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0528 14:10:53.764612   14040 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0528 14:10:53.809669   14040 ssh_runner.go:195] Run: grep 172.18.223.254	control-plane.minikube.internal$ /etc/hosts
	I0528 14:10:53.816209   14040 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.18.223.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0528 14:10:53.857067   14040 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 14:10:54.063162   14040 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0528 14:10:54.095410   14040 host.go:66] Checking if "ha-941700" exists ...
	I0528 14:10:54.096288   14040 start.go:316] joinCluster: &{Name:ha-941700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-941700 Namespace:default APIServerHAVIP:172.18.223.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.18.216.124 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.18.211.44 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:172.18.222.186 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0528 14:10:54.096319   14040 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0528 14:10:54.096319   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700 ).state
	I0528 14:10:56.322618   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:10:56.322618   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:10:56.322978   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700 ).networkadapters[0]).ipaddresses[0]
	I0528 14:10:59.093663   14040 main.go:141] libmachine: [stdout =====>] : 172.18.216.124
	
	I0528 14:10:59.093663   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:10:59.093896   14040 sshutil.go:53] new ssh client: &{IP:172.18.216.124 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-941700\id_rsa Username:docker}
	I0528 14:10:59.304969   14040 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm token create --print-join-command --ttl=0": (5.2086388s)
	I0528 14:10:59.304969   14040 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:172.18.222.186 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0528 14:10:59.304969   14040 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 80b2w8.hbne8fn0cjnhnf5r --discovery-token-ca-cert-hash sha256:539ff17077d9a18fe6ace348cbfbe22affc6c6390f84ec7c9294c19b6ec125ea --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-941700-m03 --control-plane --apiserver-advertise-address=172.18.222.186 --apiserver-bind-port=8443"
	I0528 14:11:46.419428   14040 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 80b2w8.hbne8fn0cjnhnf5r --discovery-token-ca-cert-hash sha256:539ff17077d9a18fe6ace348cbfbe22affc6c6390f84ec7c9294c19b6ec125ea --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-941700-m03 --control-plane --apiserver-advertise-address=172.18.222.186 --apiserver-bind-port=8443": (47.1143602s)
	I0528 14:11:46.419428   14040 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0528 14:11:47.203711   14040 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-941700-m03 minikube.k8s.io/updated_at=2024_05_28T14_11_47_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=c95b4fdda455689199e2a93674568b261e34dc82 minikube.k8s.io/name=ha-941700 minikube.k8s.io/primary=false
	I0528 14:11:47.382287   14040 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-941700-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0528 14:11:47.532104   14040 start.go:318] duration metric: took 53.4357045s to joinCluster
	I0528 14:11:47.532355   14040 start.go:234] Will wait 6m0s for node &{Name:m03 IP:172.18.222.186 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0528 14:11:47.534475   14040 out.go:177] * Verifying Kubernetes components...
	I0528 14:11:47.532581   14040 config.go:182] Loaded profile config "ha-941700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0528 14:11:47.553161   14040 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 14:11:47.990467   14040 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0528 14:11:48.029062   14040 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0528 14:11:48.029062   14040 kapi.go:59] client config for ha-941700: &rest.Config{Host:"https://172.18.223.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-941700\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-941700\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1a6b5a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0528 14:11:48.029062   14040 kubeadm.go:477] Overriding stale ClientConfig host https://172.18.223.254:8443 with https://172.18.216.124:8443
	I0528 14:11:48.031083   14040 node_ready.go:35] waiting up to 6m0s for node "ha-941700-m03" to be "Ready" ...
	I0528 14:11:48.031214   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700-m03
	I0528 14:11:48.031214   14040 round_trippers.go:469] Request Headers:
	I0528 14:11:48.031214   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:11:48.031214   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:11:48.045609   14040 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0528 14:11:48.535859   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700-m03
	I0528 14:11:48.535859   14040 round_trippers.go:469] Request Headers:
	I0528 14:11:48.535859   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:11:48.535859   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:11:48.553739   14040 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0528 14:11:49.044775   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700-m03
	I0528 14:11:49.044775   14040 round_trippers.go:469] Request Headers:
	I0528 14:11:49.044775   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:11:49.044775   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:11:49.051459   14040 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0528 14:11:49.536242   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700-m03
	I0528 14:11:49.536242   14040 round_trippers.go:469] Request Headers:
	I0528 14:11:49.536242   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:11:49.536242   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:11:49.542975   14040 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0528 14:11:50.037502   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700-m03
	I0528 14:11:50.037502   14040 round_trippers.go:469] Request Headers:
	I0528 14:11:50.037502   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:11:50.037502   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:11:50.045133   14040 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0528 14:11:50.045903   14040 node_ready.go:53] node "ha-941700-m03" has status "Ready":"False"
	I0528 14:11:50.541887   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700-m03
	I0528 14:11:50.541887   14040 round_trippers.go:469] Request Headers:
	I0528 14:11:50.541887   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:11:50.541887   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:11:50.553261   14040 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0528 14:11:51.034885   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700-m03
	I0528 14:11:51.034885   14040 round_trippers.go:469] Request Headers:
	I0528 14:11:51.034885   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:11:51.034885   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:11:51.039554   14040 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 14:11:51.540665   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700-m03
	I0528 14:11:51.540665   14040 round_trippers.go:469] Request Headers:
	I0528 14:11:51.540665   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:11:51.540665   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:11:51.688248   14040 round_trippers.go:574] Response Status: 200 OK in 147 milliseconds
	I0528 14:11:52.034872   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700-m03
	I0528 14:11:52.034872   14040 round_trippers.go:469] Request Headers:
	I0528 14:11:52.034872   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:11:52.034872   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:11:52.040823   14040 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0528 14:11:52.536051   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700-m03
	I0528 14:11:52.536051   14040 round_trippers.go:469] Request Headers:
	I0528 14:11:52.536051   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:11:52.536153   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:11:52.563510   14040 round_trippers.go:574] Response Status: 200 OK in 27 milliseconds
	I0528 14:11:52.564576   14040 node_ready.go:53] node "ha-941700-m03" has status "Ready":"False"
	I0528 14:11:53.041417   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700-m03
	I0528 14:11:53.041499   14040 round_trippers.go:469] Request Headers:
	I0528 14:11:53.041499   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:11:53.041499   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:11:53.045924   14040 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 14:11:53.531738   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700-m03
	I0528 14:11:53.531955   14040 round_trippers.go:469] Request Headers:
	I0528 14:11:53.531955   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:11:53.531955   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:11:53.536888   14040 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 14:11:54.038486   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700-m03
	I0528 14:11:54.038486   14040 round_trippers.go:469] Request Headers:
	I0528 14:11:54.038486   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:11:54.038486   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:11:54.043102   14040 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 14:11:54.534745   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700-m03
	I0528 14:11:54.534745   14040 round_trippers.go:469] Request Headers:
	I0528 14:11:54.534745   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:11:54.534745   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:11:54.540306   14040 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0528 14:11:55.035907   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700-m03
	I0528 14:11:55.035907   14040 round_trippers.go:469] Request Headers:
	I0528 14:11:55.035907   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:11:55.035907   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:11:55.041538   14040 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0528 14:11:55.042335   14040 node_ready.go:53] node "ha-941700-m03" has status "Ready":"False"
	I0528 14:11:55.538860   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700-m03
	I0528 14:11:55.539055   14040 round_trippers.go:469] Request Headers:
	I0528 14:11:55.539055   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:11:55.539055   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:11:55.544656   14040 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0528 14:11:56.041032   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700-m03
	I0528 14:11:56.041289   14040 round_trippers.go:469] Request Headers:
	I0528 14:11:56.041289   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:11:56.041289   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:11:56.045662   14040 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 14:11:56.545325   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700-m03
	I0528 14:11:56.545637   14040 round_trippers.go:469] Request Headers:
	I0528 14:11:56.545637   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:11:56.545637   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:11:56.553090   14040 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0528 14:11:57.042251   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700-m03
	I0528 14:11:57.042329   14040 round_trippers.go:469] Request Headers:
	I0528 14:11:57.042329   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:11:57.042329   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:11:57.047601   14040 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0528 14:11:57.048718   14040 node_ready.go:53] node "ha-941700-m03" has status "Ready":"False"
	I0528 14:11:57.541691   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700-m03
	I0528 14:11:57.541691   14040 round_trippers.go:469] Request Headers:
	I0528 14:11:57.541691   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:11:57.541802   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:11:57.546506   14040 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 14:11:58.031776   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700-m03
	I0528 14:11:58.032130   14040 round_trippers.go:469] Request Headers:
	I0528 14:11:58.032130   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:11:58.032130   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:12:07.117053   14040 round_trippers.go:574] Response Status: 200 OK in 9084 milliseconds
	I0528 14:12:07.125417   14040 node_ready.go:53] node "ha-941700-m03" has status "Ready":"False"
	I0528 14:12:07.125417   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700-m03
	I0528 14:12:07.125417   14040 round_trippers.go:469] Request Headers:
	I0528 14:12:07.125417   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:12:07.125417   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:12:07.184201   14040 round_trippers.go:574] Response Status: 200 OK in 58 milliseconds
	I0528 14:12:07.185493   14040 node_ready.go:49] node "ha-941700-m03" has status "Ready":"True"
	I0528 14:12:07.185548   14040 node_ready.go:38] duration metric: took 19.1543708s for node "ha-941700-m03" to be "Ready" ...
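Note: the repeated GET https://172.18.216.124:8443/api/v1/nodes/ha-941700-m03 requests above are the node_ready poll: the Node object is fetched roughly every 500ms until its Ready condition reports True, which took about 19s here. A stripped-down Go sketch of such a loop using only the standard library, assuming client certificate files like those listed in the kapi client config (the file names client.crt, client.key, and ca.crt are placeholders; the struct is trimmed to the fields the check needs):

package main

import (
	"crypto/tls"
	"crypto/x509"
	"encoding/json"
	"fmt"
	"net/http"
	"os"
	"time"
)

// Just enough of the Node object to read status.conditions.
type node struct {
	Status struct {
		Conditions []struct {
			Type   string `json:"type"`
			Status string `json:"status"`
		} `json:"conditions"`
	} `json:"status"`
}

func main() {
	// Profile client cert/key and cluster CA (paths are placeholders).
	cert, err := tls.LoadX509KeyPair("client.crt", "client.key")
	if err != nil {
		panic(err)
	}
	caPEM, err := os.ReadFile("ca.crt")
	if err != nil {
		panic(err)
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)

	client := &http.Client{Transport: &http.Transport{TLSClientConfig: &tls.Config{
		Certificates: []tls.Certificate{cert},
		RootCAs:      pool,
	}}}

	url := "https://172.18.216.124:8443/api/v1/nodes/ha-941700-m03"
	deadline := time.Now().Add(6 * time.Minute) // the log waits up to 6m0s

	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			var n node
			if json.NewDecoder(resp.Body).Decode(&n) == nil {
				for _, c := range n.Status.Conditions {
					if c.Type == "Ready" && c.Status == "True" {
						resp.Body.Close()
						fmt.Println("node is Ready")
						return
					}
				}
			}
			resp.Body.Close()
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms spacing of the requests above
	}
	fmt.Println("timed out waiting for Ready")
}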
	I0528 14:12:07.185639   14040 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0528 14:12:07.185696   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods
	I0528 14:12:07.185696   14040 round_trippers.go:469] Request Headers:
	I0528 14:12:07.185696   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:12:07.185696   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:12:07.255235   14040 round_trippers.go:574] Response Status: 200 OK in 69 milliseconds
	I0528 14:12:07.266538   14040 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-8prdw" in "kube-system" namespace to be "Ready" ...
	I0528 14:12:07.266538   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8prdw
	I0528 14:12:07.266538   14040 round_trippers.go:469] Request Headers:
	I0528 14:12:07.266538   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:12:07.266538   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:12:07.285610   14040 round_trippers.go:574] Response Status: 200 OK in 19 milliseconds
	I0528 14:12:07.287670   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700
	I0528 14:12:07.287728   14040 round_trippers.go:469] Request Headers:
	I0528 14:12:07.287728   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:12:07.287728   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:12:07.320544   14040 round_trippers.go:574] Response Status: 200 OK in 32 milliseconds
	I0528 14:12:07.321619   14040 pod_ready.go:92] pod "coredns-7db6d8ff4d-8prdw" in "kube-system" namespace has status "Ready":"True"
	I0528 14:12:07.321619   14040 pod_ready.go:81] duration metric: took 55.0811ms for pod "coredns-7db6d8ff4d-8prdw" in "kube-system" namespace to be "Ready" ...
	I0528 14:12:07.321619   14040 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-kmzqb" in "kube-system" namespace to be "Ready" ...
	I0528 14:12:07.321758   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kmzqb
	I0528 14:12:07.321908   14040 round_trippers.go:469] Request Headers:
	I0528 14:12:07.321908   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:12:07.321908   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:12:07.329927   14040 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0528 14:12:07.330869   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700
	I0528 14:12:07.330955   14040 round_trippers.go:469] Request Headers:
	I0528 14:12:07.331019   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:12:07.331019   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:12:07.335126   14040 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 14:12:07.336132   14040 pod_ready.go:92] pod "coredns-7db6d8ff4d-kmzqb" in "kube-system" namespace has status "Ready":"True"
	I0528 14:12:07.336132   14040 pod_ready.go:81] duration metric: took 14.4209ms for pod "coredns-7db6d8ff4d-kmzqb" in "kube-system" namespace to be "Ready" ...
	I0528 14:12:07.336132   14040 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-941700" in "kube-system" namespace to be "Ready" ...
	I0528 14:12:07.336132   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods/etcd-ha-941700
	I0528 14:12:07.336132   14040 round_trippers.go:469] Request Headers:
	I0528 14:12:07.336132   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:12:07.336132   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:12:07.352316   14040 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0528 14:12:07.355029   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700
	I0528 14:12:07.355145   14040 round_trippers.go:469] Request Headers:
	I0528 14:12:07.355204   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:12:07.355204   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:12:07.363406   14040 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0528 14:12:07.364411   14040 pod_ready.go:92] pod "etcd-ha-941700" in "kube-system" namespace has status "Ready":"True"
	I0528 14:12:07.364411   14040 pod_ready.go:81] duration metric: took 28.2795ms for pod "etcd-ha-941700" in "kube-system" namespace to be "Ready" ...
	I0528 14:12:07.364411   14040 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-941700-m02" in "kube-system" namespace to be "Ready" ...
	I0528 14:12:07.364411   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods/etcd-ha-941700-m02
	I0528 14:12:07.364411   14040 round_trippers.go:469] Request Headers:
	I0528 14:12:07.364411   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:12:07.364411   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:12:07.371131   14040 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0528 14:12:07.372886   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700-m02
	I0528 14:12:07.372886   14040 round_trippers.go:469] Request Headers:
	I0528 14:12:07.372886   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:12:07.372886   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:12:07.376930   14040 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 14:12:07.378681   14040 pod_ready.go:92] pod "etcd-ha-941700-m02" in "kube-system" namespace has status "Ready":"True"
	I0528 14:12:07.378766   14040 pod_ready.go:81] duration metric: took 14.2701ms for pod "etcd-ha-941700-m02" in "kube-system" namespace to be "Ready" ...
	I0528 14:12:07.378766   14040 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-941700-m03" in "kube-system" namespace to be "Ready" ...
	I0528 14:12:07.378824   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods/etcd-ha-941700-m03
	I0528 14:12:07.378890   14040 round_trippers.go:469] Request Headers:
	I0528 14:12:07.378890   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:12:07.378890   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:12:07.384092   14040 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0528 14:12:07.533925   14040 request.go:629] Waited for 148.5561ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.216.124:8443/api/v1/nodes/ha-941700-m03
	I0528 14:12:07.533987   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700-m03
	I0528 14:12:07.533987   14040 round_trippers.go:469] Request Headers:
	I0528 14:12:07.533987   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:12:07.533987   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:12:07.540217   14040 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0528 14:12:07.541992   14040 pod_ready.go:92] pod "etcd-ha-941700-m03" in "kube-system" namespace has status "Ready":"True"
	I0528 14:12:07.541992   14040 pod_ready.go:81] duration metric: took 163.2252ms for pod "etcd-ha-941700-m03" in "kube-system" namespace to be "Ready" ...
	I0528 14:12:07.541992   14040 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-941700" in "kube-system" namespace to be "Ready" ...
	I0528 14:12:07.737071   14040 request.go:629] Waited for 194.9037ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-941700
	I0528 14:12:07.737308   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-941700
	I0528 14:12:07.737381   14040 round_trippers.go:469] Request Headers:
	I0528 14:12:07.737381   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:12:07.737461   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:12:07.745390   14040 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0528 14:12:07.926364   14040 request.go:629] Waited for 179.4381ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.216.124:8443/api/v1/nodes/ha-941700
	I0528 14:12:07.926479   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700
	I0528 14:12:07.926479   14040 round_trippers.go:469] Request Headers:
	I0528 14:12:07.926479   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:12:07.926479   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:12:07.931637   14040 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0528 14:12:07.932326   14040 pod_ready.go:92] pod "kube-apiserver-ha-941700" in "kube-system" namespace has status "Ready":"True"
	I0528 14:12:07.932326   14040 pod_ready.go:81] duration metric: took 390.3332ms for pod "kube-apiserver-ha-941700" in "kube-system" namespace to be "Ready" ...
	I0528 14:12:07.932326   14040 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-941700-m02" in "kube-system" namespace to be "Ready" ...
	I0528 14:12:08.129401   14040 request.go:629] Waited for 197.0745ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-941700-m02
	I0528 14:12:08.129626   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-941700-m02
	I0528 14:12:08.129626   14040 round_trippers.go:469] Request Headers:
	I0528 14:12:08.129718   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:12:08.129718   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:12:08.135126   14040 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0528 14:12:08.334541   14040 request.go:629] Waited for 198.3373ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.216.124:8443/api/v1/nodes/ha-941700-m02
	I0528 14:12:08.334890   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700-m02
	I0528 14:12:08.335006   14040 round_trippers.go:469] Request Headers:
	I0528 14:12:08.335006   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:12:08.335006   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:12:08.340224   14040 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0528 14:12:08.340984   14040 pod_ready.go:92] pod "kube-apiserver-ha-941700-m02" in "kube-system" namespace has status "Ready":"True"
	I0528 14:12:08.340984   14040 pod_ready.go:81] duration metric: took 408.657ms for pod "kube-apiserver-ha-941700-m02" in "kube-system" namespace to be "Ready" ...
	I0528 14:12:08.341581   14040 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-941700-m03" in "kube-system" namespace to be "Ready" ...
	I0528 14:12:08.539548   14040 request.go:629] Waited for 197.6541ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-941700-m03
	I0528 14:12:08.539740   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-941700-m03
	I0528 14:12:08.539740   14040 round_trippers.go:469] Request Headers:
	I0528 14:12:08.539740   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:12:08.539843   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:12:08.547350   14040 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0528 14:12:08.728678   14040 request.go:629] Waited for 179.0175ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.216.124:8443/api/v1/nodes/ha-941700-m03
	I0528 14:12:08.729032   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700-m03
	I0528 14:12:08.729032   14040 round_trippers.go:469] Request Headers:
	I0528 14:12:08.729032   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:12:08.729032   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:12:08.734213   14040 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 14:12:08.734891   14040 pod_ready.go:92] pod "kube-apiserver-ha-941700-m03" in "kube-system" namespace has status "Ready":"True"
	I0528 14:12:08.734949   14040 pod_ready.go:81] duration metric: took 393.3666ms for pod "kube-apiserver-ha-941700-m03" in "kube-system" namespace to be "Ready" ...
	I0528 14:12:08.734949   14040 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-941700" in "kube-system" namespace to be "Ready" ...
	I0528 14:12:08.933323   14040 request.go:629] Waited for 198.0368ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-941700
	I0528 14:12:08.933504   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-941700
	I0528 14:12:08.933504   14040 round_trippers.go:469] Request Headers:
	I0528 14:12:08.933504   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:12:08.933504   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:12:08.939531   14040 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0528 14:12:09.137030   14040 request.go:629] Waited for 195.9391ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.216.124:8443/api/v1/nodes/ha-941700
	I0528 14:12:09.137156   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700
	I0528 14:12:09.137156   14040 round_trippers.go:469] Request Headers:
	I0528 14:12:09.137184   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:12:09.137184   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:12:09.141811   14040 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 14:12:09.143290   14040 pod_ready.go:92] pod "kube-controller-manager-ha-941700" in "kube-system" namespace has status "Ready":"True"
	I0528 14:12:09.143290   14040 pod_ready.go:81] duration metric: took 408.3407ms for pod "kube-controller-manager-ha-941700" in "kube-system" namespace to be "Ready" ...
	I0528 14:12:09.143290   14040 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-941700-m02" in "kube-system" namespace to be "Ready" ...
	I0528 14:12:09.341149   14040 request.go:629] Waited for 197.6612ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-941700-m02
	I0528 14:12:09.341313   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-941700-m02
	I0528 14:12:09.341313   14040 round_trippers.go:469] Request Headers:
	I0528 14:12:09.341313   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:12:09.341313   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:12:09.347203   14040 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0528 14:12:09.528596   14040 request.go:629] Waited for 179.3817ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.216.124:8443/api/v1/nodes/ha-941700-m02
	I0528 14:12:09.528596   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700-m02
	I0528 14:12:09.528596   14040 round_trippers.go:469] Request Headers:
	I0528 14:12:09.528596   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:12:09.528596   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:12:09.534262   14040 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0528 14:12:09.535775   14040 pod_ready.go:92] pod "kube-controller-manager-ha-941700-m02" in "kube-system" namespace has status "Ready":"True"
	I0528 14:12:09.535775   14040 pod_ready.go:81] duration metric: took 392.4836ms for pod "kube-controller-manager-ha-941700-m02" in "kube-system" namespace to be "Ready" ...
	I0528 14:12:09.535775   14040 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-941700-m03" in "kube-system" namespace to be "Ready" ...
	I0528 14:12:09.734653   14040 request.go:629] Waited for 198.6867ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-941700-m03
	I0528 14:12:09.734653   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-941700-m03
	I0528 14:12:09.734898   14040 round_trippers.go:469] Request Headers:
	I0528 14:12:09.734898   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:12:09.734898   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:12:09.739941   14040 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0528 14:12:09.937627   14040 request.go:629] Waited for 195.7272ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.216.124:8443/api/v1/nodes/ha-941700-m03
	I0528 14:12:09.937842   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700-m03
	I0528 14:12:09.937934   14040 round_trippers.go:469] Request Headers:
	I0528 14:12:09.937934   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:12:09.937934   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:12:09.943548   14040 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 14:12:09.944100   14040 pod_ready.go:92] pod "kube-controller-manager-ha-941700-m03" in "kube-system" namespace has status "Ready":"True"
	I0528 14:12:09.944230   14040 pod_ready.go:81] duration metric: took 408.3244ms for pod "kube-controller-manager-ha-941700-m03" in "kube-system" namespace to be "Ready" ...
	I0528 14:12:09.944230   14040 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-dcrq6" in "kube-system" namespace to be "Ready" ...
	I0528 14:12:10.125831   14040 request.go:629] Waited for 181.5347ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dcrq6
	I0528 14:12:10.125831   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dcrq6
	I0528 14:12:10.125831   14040 round_trippers.go:469] Request Headers:
	I0528 14:12:10.125831   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:12:10.125831   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:12:10.133255   14040 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0528 14:12:10.331222   14040 request.go:629] Waited for 196.6116ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.216.124:8443/api/v1/nodes/ha-941700-m03
	I0528 14:12:10.331573   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700-m03
	I0528 14:12:10.331573   14040 round_trippers.go:469] Request Headers:
	I0528 14:12:10.331638   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:12:10.331638   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:12:10.341278   14040 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0528 14:12:10.341991   14040 pod_ready.go:92] pod "kube-proxy-dcrq6" in "kube-system" namespace has status "Ready":"True"
	I0528 14:12:10.342120   14040 pod_ready.go:81] duration metric: took 397.8887ms for pod "kube-proxy-dcrq6" in "kube-system" namespace to be "Ready" ...
	I0528 14:12:10.342120   14040 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hrwn5" in "kube-system" namespace to be "Ready" ...
	I0528 14:12:10.534582   14040 request.go:629] Waited for 192.3229ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hrwn5
	I0528 14:12:10.534888   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hrwn5
	I0528 14:12:10.534888   14040 round_trippers.go:469] Request Headers:
	I0528 14:12:10.534954   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:12:10.534954   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:12:10.540937   14040 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0528 14:12:10.736462   14040 request.go:629] Waited for 194.4538ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.216.124:8443/api/v1/nodes/ha-941700
	I0528 14:12:10.736724   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700
	I0528 14:12:10.736724   14040 round_trippers.go:469] Request Headers:
	I0528 14:12:10.736788   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:12:10.736788   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:12:10.742129   14040 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0528 14:12:10.744144   14040 pod_ready.go:92] pod "kube-proxy-hrwn5" in "kube-system" namespace has status "Ready":"True"
	I0528 14:12:10.744144   14040 pod_ready.go:81] duration metric: took 402.0235ms for pod "kube-proxy-hrwn5" in "kube-system" namespace to be "Ready" ...
	I0528 14:12:10.744251   14040 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-spzwh" in "kube-system" namespace to be "Ready" ...
	I0528 14:12:10.925832   14040 request.go:629] Waited for 181.3764ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods/kube-proxy-spzwh
	I0528 14:12:10.925980   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods/kube-proxy-spzwh
	I0528 14:12:10.925980   14040 round_trippers.go:469] Request Headers:
	I0528 14:12:10.925980   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:12:10.925980   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:12:10.931779   14040 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0528 14:12:11.128697   14040 request.go:629] Waited for 195.8042ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.216.124:8443/api/v1/nodes/ha-941700-m02
	I0528 14:12:11.128950   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700-m02
	I0528 14:12:11.129015   14040 round_trippers.go:469] Request Headers:
	I0528 14:12:11.129076   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:12:11.129076   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:12:11.134195   14040 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0528 14:12:11.135117   14040 pod_ready.go:92] pod "kube-proxy-spzwh" in "kube-system" namespace has status "Ready":"True"
	I0528 14:12:11.135117   14040 pod_ready.go:81] duration metric: took 390.8657ms for pod "kube-proxy-spzwh" in "kube-system" namespace to be "Ready" ...
	I0528 14:12:11.135117   14040 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-941700" in "kube-system" namespace to be "Ready" ...
	I0528 14:12:11.331020   14040 request.go:629] Waited for 195.5592ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-941700
	I0528 14:12:11.331129   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-941700
	I0528 14:12:11.331129   14040 round_trippers.go:469] Request Headers:
	I0528 14:12:11.331129   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:12:11.331129   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:12:11.337498   14040 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0528 14:12:11.535769   14040 request.go:629] Waited for 197.5111ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.216.124:8443/api/v1/nodes/ha-941700
	I0528 14:12:11.535769   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700
	I0528 14:12:11.535769   14040 round_trippers.go:469] Request Headers:
	I0528 14:12:11.535769   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:12:11.535769   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:12:11.542028   14040 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0528 14:12:11.543285   14040 pod_ready.go:92] pod "kube-scheduler-ha-941700" in "kube-system" namespace has status "Ready":"True"
	I0528 14:12:11.543285   14040 pod_ready.go:81] duration metric: took 408.1672ms for pod "kube-scheduler-ha-941700" in "kube-system" namespace to be "Ready" ...
	I0528 14:12:11.543285   14040 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-941700-m02" in "kube-system" namespace to be "Ready" ...
	I0528 14:12:11.737741   14040 request.go:629] Waited for 194.2819ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-941700-m02
	I0528 14:12:11.737955   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-941700-m02
	I0528 14:12:11.737955   14040 round_trippers.go:469] Request Headers:
	I0528 14:12:11.737955   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:12:11.738048   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:12:11.742817   14040 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 14:12:11.940686   14040 request.go:629] Waited for 195.9707ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.216.124:8443/api/v1/nodes/ha-941700-m02
	I0528 14:12:11.940686   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700-m02
	I0528 14:12:11.940686   14040 round_trippers.go:469] Request Headers:
	I0528 14:12:11.940686   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:12:11.940686   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:12:11.946254   14040 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0528 14:12:11.947406   14040 pod_ready.go:92] pod "kube-scheduler-ha-941700-m02" in "kube-system" namespace has status "Ready":"True"
	I0528 14:12:11.947472   14040 pod_ready.go:81] duration metric: took 404.1862ms for pod "kube-scheduler-ha-941700-m02" in "kube-system" namespace to be "Ready" ...
	I0528 14:12:11.947472   14040 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-941700-m03" in "kube-system" namespace to be "Ready" ...
	I0528 14:12:12.126585   14040 request.go:629] Waited for 178.8135ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-941700-m03
	I0528 14:12:12.126683   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-941700-m03
	I0528 14:12:12.126683   14040 round_trippers.go:469] Request Headers:
	I0528 14:12:12.126683   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:12:12.126769   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:12:12.131238   14040 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 14:12:12.330331   14040 request.go:629] Waited for 197.3557ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.216.124:8443/api/v1/nodes/ha-941700-m03
	I0528 14:12:12.330557   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700-m03
	I0528 14:12:12.330557   14040 round_trippers.go:469] Request Headers:
	I0528 14:12:12.330557   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:12:12.330557   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:12:12.336050   14040 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0528 14:12:12.337061   14040 pod_ready.go:92] pod "kube-scheduler-ha-941700-m03" in "kube-system" namespace has status "Ready":"True"
	I0528 14:12:12.337120   14040 pod_ready.go:81] duration metric: took 389.6467ms for pod "kube-scheduler-ha-941700-m03" in "kube-system" namespace to be "Ready" ...
	I0528 14:12:12.337186   14040 pod_ready.go:38] duration metric: took 5.1515363s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0528 14:12:12.337271   14040 api_server.go:52] waiting for apiserver process to appear ...
	I0528 14:12:12.348972   14040 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 14:12:12.381471   14040 api_server.go:72] duration metric: took 24.8490637s to wait for apiserver process to appear ...
	I0528 14:12:12.381538   14040 api_server.go:88] waiting for apiserver healthz status ...
	I0528 14:12:12.381538   14040 api_server.go:253] Checking apiserver healthz at https://172.18.216.124:8443/healthz ...
	I0528 14:12:12.392167   14040 api_server.go:279] https://172.18.216.124:8443/healthz returned 200:
	ok
	I0528 14:12:12.392910   14040 round_trippers.go:463] GET https://172.18.216.124:8443/version
	I0528 14:12:12.392910   14040 round_trippers.go:469] Request Headers:
	I0528 14:12:12.393025   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:12:12.393025   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:12:12.394363   14040 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0528 14:12:12.395346   14040 api_server.go:141] control plane version: v1.30.1
	I0528 14:12:12.395346   14040 api_server.go:131] duration metric: took 13.8084ms to wait for apiserver health ...
	I0528 14:12:12.395346   14040 system_pods.go:43] waiting for kube-system pods to appear ...
	I0528 14:12:12.531971   14040 request.go:629] Waited for 136.514ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods
	I0528 14:12:12.532188   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods
	I0528 14:12:12.532188   14040 round_trippers.go:469] Request Headers:
	I0528 14:12:12.532301   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:12:12.532301   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:12:12.546358   14040 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0528 14:12:12.562536   14040 system_pods.go:59] 24 kube-system pods found
	I0528 14:12:12.562536   14040 system_pods.go:61] "coredns-7db6d8ff4d-8prdw" [4736eb43-2c5f-4edc-8070-bd2f4cb03c25] Running
	I0528 14:12:12.562536   14040 system_pods.go:61] "coredns-7db6d8ff4d-kmzqb" [347c74c7-9b8d-4b7b-976e-4723c0a1c7f6] Running
	I0528 14:12:12.562536   14040 system_pods.go:61] "etcd-ha-941700" [4fa0a90e-5c48-4186-858f-b1dfd9eaa2e8] Running
	I0528 14:12:12.562536   14040 system_pods.go:61] "etcd-ha-941700-m02" [69dfe795-b2cd-49c9-bb3c-47e6d46c2411] Running
	I0528 14:12:12.562536   14040 system_pods.go:61] "etcd-ha-941700-m03" [b4e953b9-a22c-4c90-a52d-243d7cfd83f9] Running
	I0528 14:12:12.562703   14040 system_pods.go:61] "kindnet-6jgcj" [7f58e46d-d3dd-4975-b817-78111c22997c] Running
	I0528 14:12:12.562703   14040 system_pods.go:61] "kindnet-whc47" [82a221e0-726a-4f37-8f25-1a3f674f0dd0] Running
	I0528 14:12:12.562703   14040 system_pods.go:61] "kindnet-xrp24" [8f9e51b9-76c6-4a4b-a13e-9dce573292e4] Running
	I0528 14:12:12.562703   14040 system_pods.go:61] "kube-apiserver-ha-941700" [39e10d5a-cc1a-44bd-9737-49657f744c72] Running
	I0528 14:12:12.562703   14040 system_pods.go:61] "kube-apiserver-ha-941700-m02" [1a121e31-902b-4f9a-8ab7-b0318bb3b710] Running
	I0528 14:12:12.562703   14040 system_pods.go:61] "kube-apiserver-ha-941700-m03" [7fba6735-c3de-4f9e-807e-2f47a6b9f4a6] Running
	I0528 14:12:12.562703   14040 system_pods.go:61] "kube-controller-manager-ha-941700" [7bd528f3-312d-4e0c-8d7d-ad91436d1fe1] Running
	I0528 14:12:12.562703   14040 system_pods.go:61] "kube-controller-manager-ha-941700-m02" [8d22254a-beac-4bcf-bee3-7bc9b316fee5] Running
	I0528 14:12:12.562703   14040 system_pods.go:61] "kube-controller-manager-ha-941700-m03" [33718de3-4958-448b-be70-b6a049144c09] Running
	I0528 14:12:12.562703   14040 system_pods.go:61] "kube-proxy-dcrq6" [a4393658-9e64-4987-9075-00b9be537c3d] Running
	I0528 14:12:12.562703   14040 system_pods.go:61] "kube-proxy-hrwn5" [b8766939-5796-4541-b98e-7ebd6f67cd88] Running
	I0528 14:12:12.562848   14040 system_pods.go:61] "kube-proxy-spzwh" [00625372-7eda-456b-8444-2c85da29285c] Running
	I0528 14:12:12.562848   14040 system_pods.go:61] "kube-scheduler-ha-941700" [7069782c-919c-4bf2-9c1b-2b5c0c88bbd6] Running
	I0528 14:12:12.562848   14040 system_pods.go:61] "kube-scheduler-ha-941700-m02" [a4bdc26c-1a2a-4ffe-ba5f-8214c9fa7de4] Running
	I0528 14:12:12.562921   14040 system_pods.go:61] "kube-scheduler-ha-941700-m03" [2f21f70c-3b6b-4d06-a663-219b78219ab9] Running
	I0528 14:12:12.562945   14040 system_pods.go:61] "kube-vip-ha-941700" [f3ae818c-50ad-43ac-9b57-e335c92bf829] Running
	I0528 14:12:12.562945   14040 system_pods.go:61] "kube-vip-ha-941700-m02" [3881b05b-4814-4dbc-a948-6de440748ba6] Running
	I0528 14:12:12.562970   14040 system_pods.go:61] "kube-vip-ha-941700-m03" [9e49b9d0-1b80-483f-8f7f-543775fdab48] Running
	I0528 14:12:12.562970   14040 system_pods.go:61] "storage-provisioner" [b958202b-ad05-4846-b47c-92c22aa98e03] Running
	I0528 14:12:12.562970   14040 system_pods.go:74] duration metric: took 167.6232ms to wait for pod list to return data ...
	I0528 14:12:12.562970   14040 default_sa.go:34] waiting for default service account to be created ...
	I0528 14:12:12.734527   14040 request.go:629] Waited for 171.3495ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.216.124:8443/api/v1/namespaces/default/serviceaccounts
	I0528 14:12:12.734786   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/namespaces/default/serviceaccounts
	I0528 14:12:12.734862   14040 round_trippers.go:469] Request Headers:
	I0528 14:12:12.734862   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:12:12.734918   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:12:12.740542   14040 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0528 14:12:12.741251   14040 default_sa.go:45] found service account: "default"
	I0528 14:12:12.741251   14040 default_sa.go:55] duration metric: took 178.2806ms for default service account to be created ...
	I0528 14:12:12.741251   14040 system_pods.go:116] waiting for k8s-apps to be running ...
	I0528 14:12:12.936575   14040 request.go:629] Waited for 195.0904ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods
	I0528 14:12:12.936759   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods
	I0528 14:12:12.936759   14040 round_trippers.go:469] Request Headers:
	I0528 14:12:12.936759   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:12:12.936759   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:12:12.946856   14040 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0528 14:12:12.958404   14040 system_pods.go:86] 24 kube-system pods found
	I0528 14:12:12.958404   14040 system_pods.go:89] "coredns-7db6d8ff4d-8prdw" [4736eb43-2c5f-4edc-8070-bd2f4cb03c25] Running
	I0528 14:12:12.958404   14040 system_pods.go:89] "coredns-7db6d8ff4d-kmzqb" [347c74c7-9b8d-4b7b-976e-4723c0a1c7f6] Running
	I0528 14:12:12.958404   14040 system_pods.go:89] "etcd-ha-941700" [4fa0a90e-5c48-4186-858f-b1dfd9eaa2e8] Running
	I0528 14:12:12.958404   14040 system_pods.go:89] "etcd-ha-941700-m02" [69dfe795-b2cd-49c9-bb3c-47e6d46c2411] Running
	I0528 14:12:12.958404   14040 system_pods.go:89] "etcd-ha-941700-m03" [b4e953b9-a22c-4c90-a52d-243d7cfd83f9] Running
	I0528 14:12:12.958404   14040 system_pods.go:89] "kindnet-6jgcj" [7f58e46d-d3dd-4975-b817-78111c22997c] Running
	I0528 14:12:12.958404   14040 system_pods.go:89] "kindnet-whc47" [82a221e0-726a-4f37-8f25-1a3f674f0dd0] Running
	I0528 14:12:12.958404   14040 system_pods.go:89] "kindnet-xrp24" [8f9e51b9-76c6-4a4b-a13e-9dce573292e4] Running
	I0528 14:12:12.958404   14040 system_pods.go:89] "kube-apiserver-ha-941700" [39e10d5a-cc1a-44bd-9737-49657f744c72] Running
	I0528 14:12:12.958404   14040 system_pods.go:89] "kube-apiserver-ha-941700-m02" [1a121e31-902b-4f9a-8ab7-b0318bb3b710] Running
	I0528 14:12:12.958404   14040 system_pods.go:89] "kube-apiserver-ha-941700-m03" [7fba6735-c3de-4f9e-807e-2f47a6b9f4a6] Running
	I0528 14:12:12.958404   14040 system_pods.go:89] "kube-controller-manager-ha-941700" [7bd528f3-312d-4e0c-8d7d-ad91436d1fe1] Running
	I0528 14:12:12.958404   14040 system_pods.go:89] "kube-controller-manager-ha-941700-m02" [8d22254a-beac-4bcf-bee3-7bc9b316fee5] Running
	I0528 14:12:12.958404   14040 system_pods.go:89] "kube-controller-manager-ha-941700-m03" [33718de3-4958-448b-be70-b6a049144c09] Running
	I0528 14:12:12.958404   14040 system_pods.go:89] "kube-proxy-dcrq6" [a4393658-9e64-4987-9075-00b9be537c3d] Running
	I0528 14:12:12.958404   14040 system_pods.go:89] "kube-proxy-hrwn5" [b8766939-5796-4541-b98e-7ebd6f67cd88] Running
	I0528 14:12:12.958404   14040 system_pods.go:89] "kube-proxy-spzwh" [00625372-7eda-456b-8444-2c85da29285c] Running
	I0528 14:12:12.958404   14040 system_pods.go:89] "kube-scheduler-ha-941700" [7069782c-919c-4bf2-9c1b-2b5c0c88bbd6] Running
	I0528 14:12:12.958404   14040 system_pods.go:89] "kube-scheduler-ha-941700-m02" [a4bdc26c-1a2a-4ffe-ba5f-8214c9fa7de4] Running
	I0528 14:12:12.958404   14040 system_pods.go:89] "kube-scheduler-ha-941700-m03" [2f21f70c-3b6b-4d06-a663-219b78219ab9] Running
	I0528 14:12:12.958404   14040 system_pods.go:89] "kube-vip-ha-941700" [f3ae818c-50ad-43ac-9b57-e335c92bf829] Running
	I0528 14:12:12.958404   14040 system_pods.go:89] "kube-vip-ha-941700-m02" [3881b05b-4814-4dbc-a948-6de440748ba6] Running
	I0528 14:12:12.958404   14040 system_pods.go:89] "kube-vip-ha-941700-m03" [9e49b9d0-1b80-483f-8f7f-543775fdab48] Running
	I0528 14:12:12.958404   14040 system_pods.go:89] "storage-provisioner" [b958202b-ad05-4846-b47c-92c22aa98e03] Running
	I0528 14:12:12.958404   14040 system_pods.go:126] duration metric: took 217.1529ms to wait for k8s-apps to be running ...
	I0528 14:12:12.958928   14040 system_svc.go:44] waiting for kubelet service to be running ....
	I0528 14:12:12.970752   14040 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0528 14:12:13.001254   14040 system_svc.go:56] duration metric: took 42.8495ms WaitForService to wait for kubelet
	I0528 14:12:13.001325   14040 kubeadm.go:576] duration metric: took 25.4689166s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0528 14:12:13.001325   14040 node_conditions.go:102] verifying NodePressure condition ...
	I0528 14:12:13.138507   14040 request.go:629] Waited for 137.0428ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.216.124:8443/api/v1/nodes
	I0528 14:12:13.138507   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes
	I0528 14:12:13.138507   14040 round_trippers.go:469] Request Headers:
	I0528 14:12:13.138507   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:12:13.138729   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:12:13.146717   14040 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0528 14:12:13.149173   14040 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0528 14:12:13.149173   14040 node_conditions.go:123] node cpu capacity is 2
	I0528 14:12:13.149173   14040 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0528 14:12:13.149173   14040 node_conditions.go:123] node cpu capacity is 2
	I0528 14:12:13.149173   14040 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0528 14:12:13.149173   14040 node_conditions.go:123] node cpu capacity is 2
	I0528 14:12:13.149173   14040 node_conditions.go:105] duration metric: took 147.8473ms to run NodePressure ...
	I0528 14:12:13.149317   14040 start.go:240] waiting for startup goroutines ...
	I0528 14:12:13.149349   14040 start.go:254] writing updated cluster config ...
	I0528 14:12:13.163447   14040 ssh_runner.go:195] Run: rm -f paused
	I0528 14:12:13.311138   14040 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0528 14:12:13.316040   14040 out.go:177] * Done! kubectl is now configured to use "ha-941700" cluster and "default" namespace by default
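
The pod_ready entries above all follow one pattern: GET the pod, GET its node, and report success once the pod's PodReady condition is True, retrying under client-go's client-side rate limiter (the request.go:629 "Waited for ... due to client-side throttling" lines). A minimal client-go sketch of that polling pattern is shown below; the kubeconfig path, namespace, pod name, and timeout are illustrative assumptions, not minikube's actual pod_ready helper.

	// podready_sketch.go -- illustrative only; mirrors the pod_ready polling pattern
	// seen in the log above, not minikube's implementation.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumed kubeconfig location; the test harness uses the cluster's generated config.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		// Poll until the pod reports Ready, as in the GET .../pods/<name> calls above.
		err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "coredns-7db6d8ff4d-8prdw", metav1.GetOptions{})
				if err != nil {
					return false, nil // transient API errors: keep polling
				}
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
		fmt.Println("pod ready:", err == nil)
	}
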
	
	
	==> Docker <==
	May 28 21:04:13 ha-941700 dockerd[1330]: time="2024-05-28T21:04:13.600000963Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 28 21:04:13 ha-941700 dockerd[1330]: time="2024-05-28T21:04:13.600322775Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 28 21:04:13 ha-941700 cri-dockerd[1231]: time="2024-05-28T21:04:13Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/f5c08e7fa1a996277727acfb31560bf7e4c5ae58db067fd99734635b7628979b/resolv.conf as [nameserver 172.18.208.1]"
	May 28 21:04:13 ha-941700 cri-dockerd[1231]: time="2024-05-28T21:04:13Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/7fce36137fb0f2f3e3387c11e59118ec4365cb9c852d540428bd085db184eaee/resolv.conf as [nameserver 172.18.208.1]"
	May 28 21:04:13 ha-941700 cri-dockerd[1231]: time="2024-05-28T21:04:13Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/bf302748dab42213942e3c5b809f6cf49a517a7a7d63f7499d9f9c3822c29f62/resolv.conf as [nameserver 172.18.208.1]"
	May 28 21:04:14 ha-941700 dockerd[1330]: time="2024-05-28T21:04:14.045127285Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 28 21:04:14 ha-941700 dockerd[1330]: time="2024-05-28T21:04:14.045264191Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 28 21:04:14 ha-941700 dockerd[1330]: time="2024-05-28T21:04:14.045279091Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 28 21:04:14 ha-941700 dockerd[1330]: time="2024-05-28T21:04:14.045529301Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 28 21:04:14 ha-941700 dockerd[1330]: time="2024-05-28T21:04:14.223303766Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 28 21:04:14 ha-941700 dockerd[1330]: time="2024-05-28T21:04:14.224053094Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 28 21:04:14 ha-941700 dockerd[1330]: time="2024-05-28T21:04:14.224321405Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 28 21:04:14 ha-941700 dockerd[1330]: time="2024-05-28T21:04:14.225773459Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 28 21:04:14 ha-941700 dockerd[1330]: time="2024-05-28T21:04:14.241011330Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 28 21:04:14 ha-941700 dockerd[1330]: time="2024-05-28T21:04:14.241482948Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 28 21:04:14 ha-941700 dockerd[1330]: time="2024-05-28T21:04:14.241670855Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 28 21:04:14 ha-941700 dockerd[1330]: time="2024-05-28T21:04:14.242036869Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 28 21:12:07 ha-941700 dockerd[1330]: time="2024-05-28T21:12:07.209486199Z" level=info msg="shim disconnected" id=1875bf92b81108846fdcdf445dfafdb4c9db47d7ecffe9d402904776fc860a1e namespace=moby
	May 28 21:12:07 ha-941700 dockerd[1330]: time="2024-05-28T21:12:07.209586001Z" level=warning msg="cleaning up after shim disconnected" id=1875bf92b81108846fdcdf445dfafdb4c9db47d7ecffe9d402904776fc860a1e namespace=moby
	May 28 21:12:07 ha-941700 dockerd[1330]: time="2024-05-28T21:12:07.209601801Z" level=info msg="cleaning up dead shim" namespace=moby
	May 28 21:12:07 ha-941700 dockerd[1324]: time="2024-05-28T21:12:07.210634620Z" level=info msg="ignoring event" container=1875bf92b81108846fdcdf445dfafdb4c9db47d7ecffe9d402904776fc860a1e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 28 21:12:07 ha-941700 dockerd[1330]: time="2024-05-28T21:12:07.547789729Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 28 21:12:07 ha-941700 dockerd[1330]: time="2024-05-28T21:12:07.548141636Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 28 21:12:07 ha-941700 dockerd[1330]: time="2024-05-28T21:12:07.548170536Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 28 21:12:07 ha-941700 dockerd[1330]: time="2024-05-28T21:12:07.548355540Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	81c63fae6c15c       38af8ddebf499                                                                                       About a minute ago   Running             kube-vip                  1                   9796e3164ede7       kube-vip-ha-941700
	4c5c453eb0b63       cbb01a7bd410d                                                                                       9 minutes ago        Running             coredns                   0                   bf302748dab42       coredns-7db6d8ff4d-8prdw
	980d85de8140e       cbb01a7bd410d                                                                                       9 minutes ago        Running             coredns                   0                   7fce36137fb0f       coredns-7db6d8ff4d-kmzqb
	eb75bacafe16d       6e38f40d628db                                                                                       9 minutes ago        Running             storage-provisioner       0                   f5c08e7fa1a99       storage-provisioner
	e136369b3a3f9       kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8            9 minutes ago        Running             kindnet-cni               0                   7ab827b080443       kindnet-xrp24
	2dc916ecd7fb9       747097150317f                                                                                       9 minutes ago        Running             kube-proxy                0                   547af2f01c379       kube-proxy-hrwn5
	1875bf92b8110       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f   10 minutes ago       Exited              kube-vip                  0                   9796e3164ede7       kube-vip-ha-941700
	82481386243ca       a52dc94f0a912                                                                                       10 minutes ago       Running             kube-scheduler            0                   2035d37daac72       kube-scheduler-ha-941700
	c9264f5e7db1c       91be940803172                                                                                       10 minutes ago       Running             kube-apiserver            0                   8fa7440bbfb1e       kube-apiserver-ha-941700
	2d44d675f6ae3       25a1387cdab82                                                                                       10 minutes ago       Running             kube-controller-manager   0                   5e4b184ad50ca       kube-controller-manager-ha-941700
	0da681cf1666d       3861cfcd7c04c                                                                                       10 minutes ago       Running             etcd                      0                   d3e8f4f13a6d2       etcd-ha-941700
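
The container list above shows kube-vip running with ATTEMPT 1 while its original container (1875bf92b8110) is Exited, matching the dockerd "shim disconnected" / TaskDelete lines in the Docker section. One way to confirm such an exit is to inspect the container over the Docker API; the sketch below is illustrative only and assumes it runs where the node's Docker socket is reachable, it is not part of the test.

	// vipcheck_sketch.go -- illustrative; inspects the exited kube-vip container
	// listed above. Assumes the Docker socket of the node is reachable.
	package main

	import (
		"context"
		"fmt"

		"github.com/docker/docker/client"
	)

	func main() {
		cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
		if err != nil {
			panic(err)
		}
		defer cli.Close()

		// Truncated container ID taken from the table above.
		info, err := cli.ContainerInspect(context.Background(), "1875bf92b8110")
		if err != nil {
			panic(err)
		}
		fmt.Println("status:", info.State.Status, "exit code:", info.State.ExitCode)
	}
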
	
	
	==> coredns [4c5c453eb0b6] <==
	[INFO] 10.244.1.2:49307 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.002915721s
	[INFO] 10.244.1.2:55702 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001141s
	[INFO] 10.244.2.3:47815 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000120801s
	[INFO] 10.244.2.3:52357 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000174801s
	[INFO] 10.244.2.3:60150 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000257702s
	[INFO] 10.244.2.3:38762 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000557s
	[INFO] 10.244.2.3:46638 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000104101s
	[INFO] 10.244.2.2:35468 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000175201s
	[INFO] 10.244.2.2:52084 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000202501s
	[INFO] 10.244.2.2:46549 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000086801s
	[INFO] 10.244.1.2:39482 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000183202s
	[INFO] 10.244.1.2:48208 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001022s
	[INFO] 10.244.2.3:58380 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000125801s
	[INFO] 10.244.2.3:47791 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000617s
	[INFO] 10.244.2.3:58446 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000075801s
	[INFO] 10.244.2.2:51786 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000205401s
	[INFO] 10.244.2.2:46402 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0000668s
	[INFO] 10.244.2.2:46162 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000527s
	[INFO] 10.244.1.2:47633 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000222302s
	[INFO] 10.244.2.3:41106 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000190501s
	[INFO] 10.244.2.3:49018 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000401002s
	[INFO] 10.244.2.3:35462 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0000984s
	[INFO] 10.244.2.3:33923 - 5 "PTR IN 1.208.18.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000205101s
	[INFO] 10.244.2.2:53789 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000156201s
	[INFO] 10.244.2.2:38081 - 5 "PTR IN 1.208.18.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0000623s
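
The entries above are CoreDNS's log plugin recording ordinary in-cluster lookups: A/AAAA queries for kubernetes.default.svc.cluster.local and host.minikube.internal, plus PTR queries for service IPs (1.0.96.10.in-addr.arpa is 10.96.0.1 reversed, and 10.0.96.10.in-addr.arpa is the cluster DNS service 10.96.0.10). A minimal Go sketch issuing the same lookups is below; it is illustrative only and assumes it runs inside a pod whose resolv.conf points at the cluster DNS.

	// dnsprobe_sketch.go -- illustrative; repeats the lookups seen in the CoreDNS log.
	package main

	import (
		"context"
		"fmt"
		"net"
		"time"
	)

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		var r net.Resolver // uses /etc/resolv.conf, i.e. the cluster DNS inside a pod

		// Forward lookup, as in the "A IN kubernetes.default.svc.cluster.local" entries.
		addrs, err := r.LookupHost(ctx, "kubernetes.default.svc.cluster.local")
		fmt.Println("A/AAAA:", addrs, err)

		// Reverse lookup, as in the "PTR IN 1.0.96.10.in-addr.arpa" entries (10.96.0.1).
		names, err := r.LookupAddr(ctx, "10.96.0.1")
		fmt.Println("PTR:", names, err)
	}
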
	
	
	==> coredns [980d85de8140] <==
	[INFO] 127.0.0.1:55048 - 14113 "HINFO IN 4658628224283164909.4753887424590601170. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.080042402s
	[INFO] 10.244.1.2:45997 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.040510412s
	[INFO] 10.244.2.3:36534 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.000073101s
	[INFO] 10.244.2.2:53757 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.000076601s
	[INFO] 10.244.1.2:36086 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000250802s
	[INFO] 10.244.1.2:41780 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000197201s
	[INFO] 10.244.1.2:33788 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000129501s
	[INFO] 10.244.1.2:48691 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000159901s
	[INFO] 10.244.2.3:35600 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.002879521s
	[INFO] 10.244.2.3:38955 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000275302s
	[INFO] 10.244.2.3:56484 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0000708s
	[INFO] 10.244.2.2:53277 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000156101s
	[INFO] 10.244.2.2:56556 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000275302s
	[INFO] 10.244.2.2:48986 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.014579306s
	[INFO] 10.244.2.2:43791 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000072501s
	[INFO] 10.244.2.2:59645 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000157801s
	[INFO] 10.244.1.2:58983 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000139201s
	[INFO] 10.244.1.2:50857 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000151501s
	[INFO] 10.244.2.3:46176 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000124501s
	[INFO] 10.244.2.2:46419 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000079101s
	[INFO] 10.244.1.2:46653 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000267202s
	[INFO] 10.244.1.2:35565 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000127701s
	[INFO] 10.244.1.2:54940 - 5 "PTR IN 1.208.18.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000131901s
	[INFO] 10.244.2.2:39234 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000235201s
	[INFO] 10.244.2.2:56420 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000263301s
	
	
	==> describe nodes <==
	Name:               ha-941700
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-941700
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c95b4fdda455689199e2a93674568b261e34dc82
	                    minikube.k8s.io/name=ha-941700
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_28T14_03_49_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 28 May 2024 21:03:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-941700
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 28 May 2024 21:13:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 28 May 2024 21:13:02 +0000   Tue, 28 May 2024 21:13:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 28 May 2024 21:13:02 +0000   Tue, 28 May 2024 21:13:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 28 May 2024 21:13:02 +0000   Tue, 28 May 2024 21:13:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 28 May 2024 21:13:02 +0000   Tue, 28 May 2024 21:13:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.18.216.124
	  Hostname:    ha-941700
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 77f45878e3b64766bcabad456d3d63aa
	  System UUID:                5e2bb58e-529b-744d-ab43-67cbd66cc2ae
	  Boot ID:                    b7a84e0f-a48d-4650-8123-ce9427b898d6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-8prdw             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m58s
	  kube-system                 coredns-7db6d8ff4d-kmzqb             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m58s
	  kube-system                 etcd-ha-941700                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-xrp24                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m58s
	  kube-system                 kube-apiserver-ha-941700             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-ha-941700    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-hrwn5                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m58s
	  kube-system                 kube-scheduler-ha-941700             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-vip-ha-941700                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 9m57s                kube-proxy       
	  Normal  Starting                 10m                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  10m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           9m59s                node-controller  Node ha-941700 event: Registered Node ha-941700 in Controller
	  Normal  RegisteredNode           5m55s                node-controller  Node ha-941700 event: Registered Node ha-941700 in Controller
	  Normal  RegisteredNode           114s                 node-controller  Node ha-941700 event: Registered Node ha-941700 in Controller
	  Normal  NodeNotReady             74s                  node-controller  Node ha-941700 status is now: NodeNotReady
	  Normal  NodeHasSufficientMemory  59s (x2 over 10m)    kubelet          Node ha-941700 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    59s (x2 over 10m)    kubelet          Node ha-941700 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     59s (x2 over 10m)    kubelet          Node ha-941700 status is now: NodeHasSufficientPID
	  Normal  NodeReady                59s (x2 over 9m49s)  kubelet          Node ha-941700 status is now: NodeReady
	
	
	Name:               ha-941700-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-941700-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c95b4fdda455689199e2a93674568b261e34dc82
	                    minikube.k8s.io/name=ha-941700
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_28T14_07_50_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 28 May 2024 21:07:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-941700-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 28 May 2024 21:13:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 28 May 2024 21:13:22 +0000   Tue, 28 May 2024 21:07:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 28 May 2024 21:13:22 +0000   Tue, 28 May 2024 21:07:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 28 May 2024 21:13:22 +0000   Tue, 28 May 2024 21:07:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 28 May 2024 21:13:22 +0000   Tue, 28 May 2024 21:07:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.18.211.44
	  Hostname:    ha-941700-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 6c30279926374dd3bd8f665d217f3bc0
	  System UUID:                2ad5eb6b-910f-c94b-929c-5d11f4308df3
	  Boot ID:                    c87e1b84-e6c5-4f4f-b7a2-9d6bcfe12067
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-qnh86                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         64s
	  kube-system                 etcd-ha-941700-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m16s
	  kube-system                 kindnet-whc47                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m17s
	  kube-system                 kube-apiserver-ha-941700-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m17s
	  kube-system                 kube-controller-manager-ha-941700-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m17s
	  kube-system                 kube-proxy-spzwh                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m17s
	  kube-system                 kube-scheduler-ha-941700-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m17s
	  kube-system                 kube-vip-ha-941700-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m9s                   kube-proxy       
	  Normal  Starting                 6m17s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m17s (x2 over 6m17s)  kubelet          Node ha-941700-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m17s (x2 over 6m17s)  kubelet          Node ha-941700-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m17s (x2 over 6m17s)  kubelet          Node ha-941700-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m17s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m14s                  node-controller  Node ha-941700-m02 event: Registered Node ha-941700-m02 in Controller
	  Normal  NodeReady                6m5s                   kubelet          Node ha-941700-m02 status is now: NodeReady
	  Normal  RegisteredNode           5m55s                  node-controller  Node ha-941700-m02 event: Registered Node ha-941700-m02 in Controller
	  Normal  RegisteredNode           114s                   node-controller  Node ha-941700-m02 event: Registered Node ha-941700-m02 in Controller
	
	
	Name:               ha-941700-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-941700-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c95b4fdda455689199e2a93674568b261e34dc82
	                    minikube.k8s.io/name=ha-941700
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_28T14_11_47_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 28 May 2024 21:11:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-941700-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 28 May 2024 21:13:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 28 May 2024 21:13:19 +0000   Tue, 28 May 2024 21:11:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 28 May 2024 21:13:19 +0000   Tue, 28 May 2024 21:11:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 28 May 2024 21:13:19 +0000   Tue, 28 May 2024 21:11:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 28 May 2024 21:13:19 +0000   Tue, 28 May 2024 21:11:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.18.222.186
	  Hostname:    ha-941700-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 2485f1b4ca7a4f19afcda00949d8cd5c
	  System UUID:                5a5be4cd-23e0-cb46-9f05-2d08965906d1
	  Boot ID:                    1f46f7a2-aeb7-4b43-a116-0ff9dc1ac984
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-s2cwz                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         64s
	  default                     busybox-fc5497c4f-xbkdf                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         64s
	  kube-system                 etcd-ha-941700-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         2m19s
	  kube-system                 kindnet-6jgcj                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      2m21s
	  kube-system                 kube-apiserver-ha-941700-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m19s
	  kube-system                 kube-controller-manager-ha-941700-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m19s
	  kube-system                 kube-proxy-dcrq6                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m21s
	  kube-system                 kube-scheduler-ha-941700-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m18s
	  kube-system                 kube-vip-ha-941700-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m15s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m16s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m21s (x8 over 2m21s)  kubelet          Node ha-941700-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m21s (x8 over 2m21s)  kubelet          Node ha-941700-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m21s (x7 over 2m21s)  kubelet          Node ha-941700-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m20s                  node-controller  Node ha-941700-m03 event: Registered Node ha-941700-m03 in Controller
	  Normal  RegisteredNode           2m19s                  node-controller  Node ha-941700-m03 event: Registered Node ha-941700-m03 in Controller
	  Normal  RegisteredNode           114s                   node-controller  Node ha-941700-m03 event: Registered Node ha-941700-m03 in Controller
	
	
	==> dmesg <==
	[  +6.685793] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[May28 21:02] systemd-fstab-generator[642]: Ignoring "noauto" option for root device
	[  +0.179611] systemd-fstab-generator[654]: Ignoring "noauto" option for root device
	[May28 21:03] systemd-fstab-generator[947]: Ignoring "noauto" option for root device
	[  +0.109259] kauditd_printk_skb: 59 callbacks suppressed
	[  +0.552972] systemd-fstab-generator[986]: Ignoring "noauto" option for root device
	[  +0.187720] systemd-fstab-generator[998]: Ignoring "noauto" option for root device
	[  +0.241288] systemd-fstab-generator[1012]: Ignoring "noauto" option for root device
	[  +2.828087] systemd-fstab-generator[1184]: Ignoring "noauto" option for root device
	[  +0.183973] systemd-fstab-generator[1196]: Ignoring "noauto" option for root device
	[  +0.211170] systemd-fstab-generator[1208]: Ignoring "noauto" option for root device
	[  +0.289907] systemd-fstab-generator[1223]: Ignoring "noauto" option for root device
	[ +11.485671] systemd-fstab-generator[1316]: Ignoring "noauto" option for root device
	[  +0.110944] kauditd_printk_skb: 205 callbacks suppressed
	[  +3.506063] systemd-fstab-generator[1518]: Ignoring "noauto" option for root device
	[  +5.919021] systemd-fstab-generator[1711]: Ignoring "noauto" option for root device
	[  +0.107295] kauditd_printk_skb: 73 callbacks suppressed
	[  +5.816139] kauditd_printk_skb: 67 callbacks suppressed
	[  +3.735004] systemd-fstab-generator[2210]: Ignoring "noauto" option for root device
	[May28 21:04] kauditd_printk_skb: 17 callbacks suppressed
	[  +6.685830] kauditd_printk_skb: 29 callbacks suppressed
	[  +5.697655] kauditd_printk_skb: 33 callbacks suppressed
	[May28 21:06] hrtimer: interrupt took 10078045 ns
	
	
	==> etcd [0da681cf1666] <==
	{"level":"info","ts":"2024-05-28T21:12:07.108938Z","caller":"traceutil/trace.go:171","msg":"trace[791563372] transaction","detail":"{read_only:false; response_revision:1576; number_of_response:1; }","duration":"6.095421355s","start":"2024-05-28T21:12:01.0135Z","end":"2024-05-28T21:12:07.108922Z","steps":["trace[791563372] 'process raft request'  (duration: 6.093089412s)"],"step_count":1}
	{"level":"warn","ts":"2024-05-28T21:12:07.110095Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-28T21:12:01.01348Z","time spent":"6.095586358s","remote":"127.0.0.1:33598","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":540,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/ha-941700-m03\" mod_revision:1549 > success:<request_put:<key:\"/registry/leases/kube-node-lease/ha-941700-m03\" value_size:486 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/ha-941700-m03\" > >"}
	{"level":"info","ts":"2024-05-28T21:12:07.113119Z","caller":"traceutil/trace.go:171","msg":"trace[1573835121] transaction","detail":"{read_only:false; response_revision:1577; number_of_response:1; }","duration":"4.860080442s","start":"2024-05-28T21:12:02.253023Z","end":"2024-05-28T21:12:07.113104Z","steps":["trace[1573835121] 'process raft request'  (duration: 4.85454034s)"],"step_count":1}
	{"level":"warn","ts":"2024-05-28T21:12:07.113725Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-28T21:12:02.252971Z","time spent":"4.86048845s","remote":"127.0.0.1:33432","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":784,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/kube-apiserver-ha-941700-m02.17d3c3652bf8cb20\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kube-apiserver-ha-941700-m02.17d3c3652bf8cb20\" value_size:692 lease:7811370137857244565 >> failure:<>"}
	{"level":"info","ts":"2024-05-28T21:12:07.114331Z","caller":"traceutil/trace.go:171","msg":"trace[1660293986] transaction","detail":"{read_only:false; response_revision:1578; number_of_response:1; }","duration":"3.984788483s","start":"2024-05-28T21:12:03.129527Z","end":"2024-05-28T21:12:07.114315Z","steps":["trace[1660293986] 'process raft request'  (duration: 3.978071559s)"],"step_count":1}
	{"level":"warn","ts":"2024-05-28T21:12:07.11549Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-28T21:12:03.129508Z","time spent":"3.985952905s","remote":"127.0.0.1:33432","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":784,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/kube-apiserver-ha-941700-m03.17d3c36560491938\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kube-apiserver-ha-941700-m03.17d3c36560491938\" value_size:692 lease:7811370137857244565 >> failure:<>"}
	{"level":"warn","ts":"2024-05-28T21:12:07.114697Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-28T21:12:00.860456Z","time spent":"6.246232593s","remote":"127.0.0.1:33432","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":763,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/kube-apiserver-ha-941700.17d3c364d871e736\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kube-apiserver-ha-941700.17d3c364d871e736\" value_size:675 lease:7811370137857244565 >> failure:<>"}
	{"level":"info","ts":"2024-05-28T21:12:07.137146Z","caller":"traceutil/trace.go:171","msg":"trace[570939648] transaction","detail":"{read_only:false; response_revision:1581; number_of_response:1; }","duration":"123.910382ms","start":"2024-05-28T21:12:07.013218Z","end":"2024-05-28T21:12:07.137128Z","steps":["trace[570939648] 'process raft request'  (duration: 123.540675ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-28T21:12:07.15372Z","caller":"traceutil/trace.go:171","msg":"trace[233838363] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1581; }","duration":"138.897658ms","start":"2024-05-28T21:12:07.014804Z","end":"2024-05-28T21:12:07.153702Z","steps":["trace[233838363] 'process raft request'  (duration: 122.316653ms)","trace[233838363] 'compare'  (duration: 16.3278ms)"],"step_count":2}
	{"level":"info","ts":"2024-05-28T21:12:07.154196Z","caller":"traceutil/trace.go:171","msg":"trace[800031652] transaction","detail":"{read_only:false; response_revision:1582; number_of_response:1; }","duration":"129.202579ms","start":"2024-05-28T21:12:07.024984Z","end":"2024-05-28T21:12:07.154187Z","steps":["trace[800031652] 'process raft request'  (duration: 128.514166ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-28T21:12:07.154643Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"140.62739ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/\" range_end:\"/registry/minions0\" ","response":"range_response_count:3 size:13448"}
	{"level":"info","ts":"2024-05-28T21:12:07.154715Z","caller":"traceutil/trace.go:171","msg":"trace[436938673] range","detail":"{range_begin:/registry/minions/; range_end:/registry/minions0; response_count:3; response_revision:1585; }","duration":"140.723591ms","start":"2024-05-28T21:12:07.013981Z","end":"2024-05-28T21:12:07.154705Z","steps":["trace[436938673] 'agreement among raft nodes before linearized reading'  (duration: 140.512188ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-28T21:12:07.154881Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"135.18979ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-05-28T21:12:07.154941Z","caller":"traceutil/trace.go:171","msg":"trace[392229320] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1585; }","duration":"135.302892ms","start":"2024-05-28T21:12:07.01963Z","end":"2024-05-28T21:12:07.154933Z","steps":["trace[392229320] 'agreement among raft nodes before linearized reading'  (duration: 135.19109ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-28T21:12:07.15506Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"136.007905ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/cronjobs/\" range_end:\"/registry/cronjobs0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-05-28T21:12:07.155108Z","caller":"traceutil/trace.go:171","msg":"trace[74836922] range","detail":"{range_begin:/registry/cronjobs/; range_end:/registry/cronjobs0; response_count:0; response_revision:1585; }","duration":"136.059006ms","start":"2024-05-28T21:12:07.019041Z","end":"2024-05-28T21:12:07.155101Z","steps":["trace[74836922] 'agreement among raft nodes before linearized reading'  (duration: 135.992105ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-28T21:12:07.154038Z","caller":"traceutil/trace.go:171","msg":"trace[247433463] transaction","detail":"{read_only:false; response_revision:1583; number_of_response:1; }","duration":"119.672604ms","start":"2024-05-28T21:12:07.03435Z","end":"2024-05-28T21:12:07.154023Z","steps":["trace[247433463] 'process raft request'  (duration: 119.221296ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-28T21:12:07.159352Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"147.104509ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/controllers/\" range_end:\"/registry/controllers0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-05-28T21:12:07.159428Z","caller":"traceutil/trace.go:171","msg":"trace[42013657] range","detail":"{range_begin:/registry/controllers/; range_end:/registry/controllers0; response_count:0; response_revision:1585; }","duration":"147.272412ms","start":"2024-05-28T21:12:07.012147Z","end":"2024-05-28T21:12:07.159419Z","steps":["trace[42013657] 'agreement among raft nodes before linearized reading'  (duration: 147.108409ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-28T21:12:48.08115Z","caller":"traceutil/trace.go:171","msg":"trace[1808587054] transaction","detail":"{read_only:false; response_revision:1713; number_of_response:1; }","duration":"104.655551ms","start":"2024-05-28T21:12:47.976483Z","end":"2024-05-28T21:12:48.081138Z","steps":["trace[1808587054] 'process raft request'  (duration: 104.034445ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-28T21:12:48.083211Z","caller":"traceutil/trace.go:171","msg":"trace[1992034361] transaction","detail":"{read_only:false; response_revision:1714; number_of_response:1; }","duration":"101.052414ms","start":"2024-05-28T21:12:47.982149Z","end":"2024-05-28T21:12:48.083201Z","steps":["trace[1992034361] 'process raft request'  (duration: 98.66859ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-28T21:12:48.080734Z","caller":"traceutil/trace.go:171","msg":"trace[2121600387] transaction","detail":"{read_only:false; response_revision:1712; number_of_response:1; }","duration":"104.313248ms","start":"2024-05-28T21:12:47.976404Z","end":"2024-05-28T21:12:48.080718Z","steps":["trace[2121600387] 'process raft request'  (duration: 83.135836ms)","trace[2121600387] 'compare'  (duration: 20.601106ms)"],"step_count":2}
	{"level":"info","ts":"2024-05-28T21:13:42.59074Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1038}
	{"level":"info","ts":"2024-05-28T21:13:42.697173Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1038,"took":"105.712348ms","hash":2292266318,"current-db-size-bytes":3907584,"current-db-size":"3.9 MB","current-db-size-in-use-bytes":2375680,"current-db-size-in-use":"2.4 MB"}
	{"level":"info","ts":"2024-05-28T21:13:42.697309Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2292266318,"revision":1038,"compact-revision":-1}
	
	
	==> kernel <==
	 21:14:01 up 12 min,  0 users,  load average: 0.75, 0.54, 0.29
	Linux ha-941700 5.10.207 #1 SMP Wed May 22 22:17:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [e136369b3a3f] <==
	I0528 21:13:17.271087       1 main.go:250] Node ha-941700-m03 has CIDR [10.244.2.0/24] 
	I0528 21:13:27.279561       1 main.go:223] Handling node with IPs: map[172.18.216.124:{}]
	I0528 21:13:27.279714       1 main.go:227] handling current node
	I0528 21:13:27.279731       1 main.go:223] Handling node with IPs: map[172.18.211.44:{}]
	I0528 21:13:27.279739       1 main.go:250] Node ha-941700-m02 has CIDR [10.244.1.0/24] 
	I0528 21:13:27.280275       1 main.go:223] Handling node with IPs: map[172.18.222.186:{}]
	I0528 21:13:27.280316       1 main.go:250] Node ha-941700-m03 has CIDR [10.244.2.0/24] 
	I0528 21:13:37.296710       1 main.go:223] Handling node with IPs: map[172.18.216.124:{}]
	I0528 21:13:37.296800       1 main.go:227] handling current node
	I0528 21:13:37.296815       1 main.go:223] Handling node with IPs: map[172.18.211.44:{}]
	I0528 21:13:37.296823       1 main.go:250] Node ha-941700-m02 has CIDR [10.244.1.0/24] 
	I0528 21:13:37.297425       1 main.go:223] Handling node with IPs: map[172.18.222.186:{}]
	I0528 21:13:37.297518       1 main.go:250] Node ha-941700-m03 has CIDR [10.244.2.0/24] 
	I0528 21:13:47.305720       1 main.go:223] Handling node with IPs: map[172.18.216.124:{}]
	I0528 21:13:47.305820       1 main.go:227] handling current node
	I0528 21:13:47.305835       1 main.go:223] Handling node with IPs: map[172.18.211.44:{}]
	I0528 21:13:47.305883       1 main.go:250] Node ha-941700-m02 has CIDR [10.244.1.0/24] 
	I0528 21:13:47.306008       1 main.go:223] Handling node with IPs: map[172.18.222.186:{}]
	I0528 21:13:47.306038       1 main.go:250] Node ha-941700-m03 has CIDR [10.244.2.0/24] 
	I0528 21:13:57.316256       1 main.go:223] Handling node with IPs: map[172.18.216.124:{}]
	I0528 21:13:57.316392       1 main.go:227] handling current node
	I0528 21:13:57.316407       1 main.go:223] Handling node with IPs: map[172.18.211.44:{}]
	I0528 21:13:57.316415       1 main.go:250] Node ha-941700-m02 has CIDR [10.244.1.0/24] 
	I0528 21:13:57.316541       1 main.go:223] Handling node with IPs: map[172.18.222.186:{}]
	I0528 21:13:57.316555       1 main.go:250] Node ha-941700-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [c9264f5e7db1] <==
	Trace[436061386]: ---"About to write a response" 7818ms (21:12:07.111)
	Trace[436061386]: [7.823339123s] [7.823339123s] END
	I0528 21:12:07.116156       1 trace.go:236] Trace[1678403227]: "Update" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:2cfca58c-dfe4-4058-814e-ed803fb1a818,client:172.18.222.186,api-group:coordination.k8s.io,api-version:v1,name:ha-941700-m03,subresource:,namespace:kube-node-lease,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-941700-m03,user-agent:kubelet/v1.30.1 (linux/amd64) kubernetes/6911225,verb:PUT (28-May-2024 21:12:01.011) (total time: 6104ms):
	Trace[1678403227]: ["GuaranteedUpdate etcd3" audit-id:2cfca58c-dfe4-4058-814e-ed803fb1a818,key:/leases/kube-node-lease/ha-941700-m03,type:*coordination.Lease,resource:leases.coordination.k8s.io 6104ms (21:12:01.011)
	Trace[1678403227]:  ---"Txn call completed" 6103ms (21:12:07.115)]
	Trace[1678403227]: [6.104674129s] [6.104674129s] END
	I0528 21:12:07.116179       1 trace.go:236] Trace[1079511571]: "Get" accept:application/json, */*,audit-id:3c63f981-8814-49c8-91ed-3d8854da900d,client:172.18.208.1,api-group:,api-version:v1,name:ha-941700-m03,subresource:,namespace:,protocol:HTTP/2.0,resource:nodes,scope:resource,url:/api/v1/nodes/ha-941700-m03,user-agent:minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format,verb:GET (28-May-2024 21:11:58.028) (total time: 9087ms):
	Trace[1079511571]: ---"About to write a response" 9083ms (21:12:07.111)
	Trace[1079511571]: [9.087559858s] [9.087559858s] END
	I0528 21:12:07.120541       1 trace.go:236] Trace[1782411227]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:e1d4dce7-505f-4b5b-928e-067fa8421500,client:172.18.222.186,api-group:,api-version:v1,name:,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:events,scope:resource,url:/api/v1/namespaces/kube-system/events,user-agent:kubelet/v1.30.1 (linux/amd64) kubernetes/6911225,verb:POST (28-May-2024 21:12:03.128) (total time: 3992ms):
	Trace[1782411227]: ["Create etcd3" audit-id:e1d4dce7-505f-4b5b-928e-067fa8421500,key:/events/kube-system/kube-apiserver-ha-941700-m03.17d3c36560491938,type:*core.Event,resource:events 3991ms (21:12:03.128)
	Trace[1782411227]:  ---"Txn call succeeded" 3991ms (21:12:07.120)]
	Trace[1782411227]: [3.992341324s] [3.992341324s] END
	I0528 21:12:07.121816       1 trace.go:236] Trace[653085208]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:93f18f0b-f2d8-45d0-acd4-a3505949c51b,client:172.18.223.254,api-group:,api-version:v1,name:,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:events,scope:resource,url:/api/v1/namespaces/kube-system/events,user-agent:kubelet/v1.30.1 (linux/amd64) kubernetes/6911225,verb:POST (28-May-2024 21:12:00.855) (total time: 6266ms):
	Trace[653085208]: ["Create etcd3" audit-id:93f18f0b-f2d8-45d0-acd4-a3505949c51b,key:/events/kube-system/kube-apiserver-ha-941700.17d3c364d871e736,type:*core.Event,resource:events 6263ms (21:12:00.857)
	Trace[653085208]:  ---"Txn call succeeded" 6263ms (21:12:07.121)]
	Trace[653085208]: [6.266656279s] [6.266656279s] END
	I0528 21:12:07.166692       1 trace.go:236] Trace[1606402907]: "Patch" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:f4479934-50a3-4d1f-8630-5949f97c53eb,client:172.18.222.186,api-group:,api-version:v1,name:ha-941700-m03,subresource:status,namespace:,protocol:HTTP/2.0,resource:nodes,scope:resource,url:/api/v1/nodes/ha-941700-m03/status,user-agent:kubelet/v1.30.1 (linux/amd64) kubernetes/6911225,verb:PATCH (28-May-2024 21:11:58.016) (total time: 9150ms):
	Trace[1606402907]: ["GuaranteedUpdate etcd3" audit-id:f4479934-50a3-4d1f-8630-5949f97c53eb,key:/minions/ha-941700-m03,type:*core.Node,resource:nodes 9150ms (21:11:58.016)
	Trace[1606402907]:  ---"About to Encode" 9016ms (21:12:07.033)
	Trace[1606402907]:  ---"Txn call completed" 132ms (21:12:07.165)]
	Trace[1606402907]: ---"About to check admission control" 9015ms (21:12:07.032)
	Trace[1606402907]: ---"Object stored in database" 133ms (21:12:07.166)
	Trace[1606402907]: [9.150500344s] [9.150500344s] END
	W0528 21:12:07.734511       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.18.211.44 172.18.216.124 172.18.222.186]
	
	
	==> kube-controller-manager [2d44d675f6ae] <==
	I0528 21:11:40.142161       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-941700-m03" podCIDRs=["10.244.2.0/24"]
	I0528 21:11:42.656989       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-941700-m03"
	I0528 21:12:47.906663       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="39.3343ms"
	I0528 21:12:48.092004       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="183.086848ms"
	I0528 21:12:48.094267       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="2.109421ms"
	I0528 21:12:48.377514       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="47.07147ms"
	I0528 21:12:48.377646       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="64.601µs"
	I0528 21:12:57.420011       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="286.389231ms"
	I0528 21:12:57.631345       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="211.246693ms"
	I0528 21:12:57.667550       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="36.140307ms"
	I0528 21:12:57.668232       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="56.5µs"
	I0528 21:12:57.777904       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="27.09333ms"
	I0528 21:12:57.779051       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="58.1µs"
	I0528 21:12:58.907204       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-8f8hl EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-8f8hl\": the object has been modified; please apply your changes to the latest version and try again"
	I0528 21:12:58.910183       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"fe138b4a-dbe3-48c6-b1ae-6cb5ab4e5fea", APIVersion:"v1", ResourceVersion:"233", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-8f8hl EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-8f8hl": the object has been modified; please apply your changes to the latest version and try again
	I0528 21:12:58.977416       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="123.157526ms"
	I0528 21:12:58.977877       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="111.701µs"
	I0528 21:12:59.049540       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="39.856626ms"
	I0528 21:12:59.050752       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="72.001µs"
	I0528 21:13:00.713957       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="16.25053ms"
	I0528 21:13:00.715295       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.601µs"
	I0528 21:13:00.852272       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="25.054801ms"
	I0528 21:13:00.853556       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="54.7µs"
	I0528 21:13:00.951358       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="18.267446ms"
	I0528 21:13:00.951618       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="53.7µs"
	
	
	==> kube-proxy [2dc916ecd7fb] <==
	I0528 21:04:04.492831       1 server_linux.go:69] "Using iptables proxy"
	I0528 21:04:04.516303       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.18.216.124"]
	I0528 21:04:04.591574       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0528 21:04:04.591732       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0528 21:04:04.591756       1 server_linux.go:165] "Using iptables Proxier"
	I0528 21:04:04.597217       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0528 21:04:04.598085       1 server.go:872] "Version info" version="v1.30.1"
	I0528 21:04:04.598119       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0528 21:04:04.599706       1 config.go:192] "Starting service config controller"
	I0528 21:04:04.599806       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0528 21:04:04.600227       1 config.go:101] "Starting endpoint slice config controller"
	I0528 21:04:04.600257       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0528 21:04:04.601175       1 config.go:319] "Starting node config controller"
	I0528 21:04:04.601363       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0528 21:04:04.701534       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0528 21:04:04.701596       1 shared_informer.go:320] Caches are synced for node config
	I0528 21:04:04.701611       1 shared_informer.go:320] Caches are synced for service config
	W0528 21:12:51.996363       1 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.EndpointSlice ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0528 21:12:51.996362       1 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0528 21:12:51.996435       1 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	
	
	==> kube-scheduler [82481386243c] <==
	E0528 21:03:46.332028       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0528 21:03:46.351278       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0528 21:03:46.351529       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0528 21:03:46.422939       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0528 21:03:46.424926       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0528 21:03:46.464076       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0528 21:03:46.464323       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0528 21:03:46.571263       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0528 21:03:46.573121       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0528 21:03:46.646270       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0528 21:03:46.646617       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0528 21:03:46.676236       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0528 21:03:46.676591       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0528 21:03:46.735637       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0528 21:03:46.735998       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0528 21:03:48.313973       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0528 21:11:40.247560       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-vrmgz\": pod kindnet-vrmgz is already assigned to node \"ha-941700-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-vrmgz" node="ha-941700-m03"
	E0528 21:11:40.247673       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-vrmgz\": pod kindnet-vrmgz is already assigned to node \"ha-941700-m03\"" pod="kube-system/kindnet-vrmgz"
	I0528 21:12:57.216396       1 cache.go:503] "Pod was added to a different node than it was assumed" podKey="7ae7d58b-be7a-4aae-9ef7-0c586e2e5c79" pod="default/busybox-fc5497c4f-qnh86" assumedNode="ha-941700-m02" currentNode="ha-941700-m03"
	E0528 21:12:57.245638       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-xbkdf\": pod busybox-fc5497c4f-xbkdf is already assigned to node \"ha-941700-m03\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-xbkdf" node="ha-941700-m02"
	E0528 21:12:57.245929       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-xbkdf\": pod busybox-fc5497c4f-xbkdf is already assigned to node \"ha-941700-m03\"" pod="default/busybox-fc5497c4f-xbkdf"
	E0528 21:12:57.249153       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-qnh86\": pod busybox-fc5497c4f-qnh86 is already assigned to node \"ha-941700-m02\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-qnh86" node="ha-941700-m03"
	E0528 21:12:57.249246       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 7ae7d58b-be7a-4aae-9ef7-0c586e2e5c79(default/busybox-fc5497c4f-qnh86) was assumed on ha-941700-m03 but assigned to ha-941700-m02" pod="default/busybox-fc5497c4f-qnh86"
	E0528 21:12:57.249286       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-qnh86\": pod busybox-fc5497c4f-qnh86 is already assigned to node \"ha-941700-m02\"" pod="default/busybox-fc5497c4f-qnh86"
	I0528 21:12:57.249384       1 schedule_one.go:1064] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-qnh86" node="ha-941700-m02"
	
	
	==> kubelet <==
	May 28 21:12:48 ha-941700 kubelet[2217]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 28 21:12:48 ha-941700 kubelet[2217]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 28 21:12:48 ha-941700 kubelet[2217]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 28 21:12:48 ha-941700 kubelet[2217]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 28 21:12:48 ha-941700 kubelet[2217]: E0528 21:12:48.579018    2217 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-941700?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="1.6s"
	May 28 21:12:49 ha-941700 kubelet[2217]: E0528 21:12:49.059019    2217 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"ha-941700\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-941700?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	May 28 21:12:52 ha-941700 kubelet[2217]: W0528 21:12:52.173164    2217 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	May 28 21:12:52 ha-941700 kubelet[2217]: W0528 21:12:52.173507    2217 reflector.go:470] object-"kube-system"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	May 28 21:12:52 ha-941700 kubelet[2217]: E0528 21:12:52.173586    2217 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"ha-941700\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-941700?timeout=10s\": http2: client connection lost"
	May 28 21:12:52 ha-941700 kubelet[2217]: E0528 21:12:52.173602    2217 kubelet_node_status.go:531] "Unable to update node status" err="update node status exceeds retry count"
	May 28 21:12:52 ha-941700 kubelet[2217]: W0528 21:12:52.173642    2217 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	May 28 21:12:52 ha-941700 kubelet[2217]: W0528 21:12:52.173672    2217 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	May 28 21:12:52 ha-941700 kubelet[2217]: W0528 21:12:52.173698    2217 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	May 28 21:12:52 ha-941700 kubelet[2217]: W0528 21:12:52.173800    2217 reflector.go:470] object-"kube-system"/"coredns": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	May 28 21:12:52 ha-941700 kubelet[2217]: I0528 21:12:52.173990    2217 status_manager.go:853] "Failed to get status for pod" podUID="91936c0059f76d9a86d5f703b85bd231" pod="kube-system/kube-vip-ha-941700" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-vip-ha-941700\": http2: client connection lost"
	May 28 21:12:52 ha-941700 kubelet[2217]: E0528 21:12:52.173368    2217 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/events/kube-apiserver-ha-941700.17d3c364efdcfc0a\": http2: client connection lost" event="&Event{ObjectMeta:{kube-apiserver-ha-941700.17d3c364efdcfc0a  kube-system   1588 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ha-941700,UID:9add3ce8e0944387360fdfbc07467240,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ha-941700,},FirstTimestamp:2024-05-28 21:12:01 +0000 UTC,LastTimestamp:2024-05-28 21:12:07.01130065 +0000 UTC m=+498.666179047,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-941700,}"
	May 28 21:12:52 ha-941700 kubelet[2217]: E0528 21:12:52.174982    2217 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-941700?timeout=10s\": http2: client connection lost" interval="3.2s"
	May 28 21:12:52 ha-941700 kubelet[2217]: W0528 21:12:52.175050    2217 reflector.go:470] object-"kube-system"/"kube-proxy": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	May 28 21:12:52 ha-941700 kubelet[2217]: W0528 21:12:52.175088    2217 reflector.go:470] pkg/kubelet/config/apiserver.go:66: watch of *v1.Pod ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	May 28 21:12:53 ha-941700 kubelet[2217]: I0528 21:12:53.469999    2217 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=522.469976877 podStartE2EDuration="8m42.469976877s" podCreationTimestamp="2024-05-28 21:04:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-05-28 21:04:15.538579406 +0000 UTC m=+27.193457803" watchObservedRunningTime="2024-05-28 21:12:53.469976877 +0000 UTC m=+545.124855274"
	May 28 21:13:48 ha-941700 kubelet[2217]: E0528 21:13:48.560552    2217 iptables.go:577] "Could not set up iptables canary" err=<
	May 28 21:13:48 ha-941700 kubelet[2217]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 28 21:13:48 ha-941700 kubelet[2217]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 28 21:13:48 ha-941700 kubelet[2217]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 28 21:13:48 ha-941700 kubelet[2217]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

-- /stdout --
** stderr ** 
	W0528 14:13:53.268789   11432 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
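The kubelet entries in the stdout block above all fail with "http2: client connection lost" against https://control-plane.minikube.internal:8443, covering both the informer watches and the node-status update. For reference, a minimal client-go sketch of the same kind of node GET with the 10s timeout seen in the log (assumptions: the profile's kubeconfig is exported via KUBECONFIG; this program is not part of the test suite):

    // nodeget.go - illustrative sketch only, not part of the minikube tests.
    // Issues the same kind of node GET the kubelet keeps retrying above.
    package main

    import (
        "context"
        "fmt"
        "os"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumption: KUBECONFIG points at the ha-941700 profile's kubeconfig.
        cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
        if err != nil {
            fmt.Fprintln(os.Stderr, "load kubeconfig:", err)
            os.Exit(1)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            fmt.Fprintln(os.Stderr, "build clientset:", err)
            os.Exit(1)
        }
        // Mirror the ?timeout=10s on the failing request in the kubelet log.
        ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
        defer cancel()
        node, err := client.CoreV1().Nodes().Get(ctx, "ha-941700", metav1.GetOptions{})
        if err != nil {
            // With the control-plane VIP unreachable this surfaces the same
            // transport error the kubelet reports above.
            fmt.Fprintln(os.Stderr, "get node:", err)
            os.Exit(1)
        }
        fmt.Println("node", node.Name, "is reachable through the API server")
    }

If a call like this succeeds from the host while the kubelet inside the VM still logs connection loss, that would point at the VIP path rather than the API server itself.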
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-941700 -n ha-941700
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-941700 -n ha-941700: (12.7972965s)
helpers_test.go:261: (dbg) Run:  kubectl --context ha-941700 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/PingHostFromPods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (69.65s)
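Each failing command in this report also prints the same stderr warning about the Docker CLI context "default" pointing at a meta.json that does not exist under .docker\contexts\meta; the hashed directory name in that path looks like the SHA-256 of the context name. A hypothetical Go sketch (not minikube or Docker code) that checks for the metadata file before trying to resolve the context:

    // contextcheck.go - hypothetical illustration only.
    package main

    import (
        "crypto/sha256"
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        home, err := os.UserHomeDir()
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        // Assumption: Docker stores CLI context metadata under a directory
        // named after the SHA-256 of the context name, matching the path
        // quoted in the warning above.
        name := "default"
        dir := fmt.Sprintf("%x", sha256.Sum256([]byte(name)))
        meta := filepath.Join(home, ".docker", "contexts", "meta", dir, "meta.json")
        if _, err := os.Stat(meta); os.IsNotExist(err) {
            fmt.Printf("context %q has no metadata at %s; falling back to defaults\n", name, meta)
            return
        }
        fmt.Println("context metadata found:", meta)
    }

The warning does not abort the command; it only lands in stderr, as in the blocks above and below.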

TestMultiControlPlane/serial/StopSecondaryNode (80.58s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-941700 node stop m02 -v=7 --alsologtostderr
E0528 14:30:06.430509    1780 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-854100\client.crt: The system cannot find the path specified.
ha_test.go:363: (dbg) Done: out/minikube-windows-amd64.exe -p ha-941700 node stop m02 -v=7 --alsologtostderr: (35.3167036s)
ha_test.go:369: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-941700 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-941700 status -v=7 --alsologtostderr: exit status 1 (9.9914445s)

** stderr ** 
	W0528 14:30:31.244691   10604 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0528 14:30:31.253531   10604 out.go:291] Setting OutFile to fd 804 ...
	I0528 14:30:31.409168   10604 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 14:30:31.409168   10604 out.go:304] Setting ErrFile to fd 892...
	I0528 14:30:31.409168   10604 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 14:30:31.430004   10604 out.go:298] Setting JSON to false
	I0528 14:30:31.430004   10604 mustload.go:65] Loading cluster: ha-941700
	I0528 14:30:31.430004   10604 notify.go:220] Checking for updates...
	I0528 14:30:31.430481   10604 config.go:182] Loaded profile config "ha-941700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0528 14:30:31.430481   10604 status.go:255] checking status of ha-941700 ...
	I0528 14:30:31.431230   10604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700 ).state
	I0528 14:30:33.728239   10604 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:30:33.728239   10604 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:30:33.728239   10604 status.go:330] ha-941700 host status = "Running" (err=<nil>)
	I0528 14:30:33.728239   10604 host.go:66] Checking if "ha-941700" exists ...
	I0528 14:30:33.749831   10604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700 ).state
	I0528 14:30:36.017499   10604 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:30:36.017499   10604 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:30:36.018538   10604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700 ).networkadapters[0]).ipaddresses[0]
	I0528 14:30:38.735380   10604 main.go:141] libmachine: [stdout =====>] : 172.18.216.124
	
	I0528 14:30:38.735380   10604 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:30:38.735380   10604 host.go:66] Checking if "ha-941700" exists ...
	I0528 14:30:38.748244   10604 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0528 14:30:38.748244   10604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700 ).state
	I0528 14:30:41.058461   10604 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:30:41.058650   10604 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:30:41.058804   10604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700 ).networkadapters[0]).ipaddresses[0]

** /stderr **
ha_test.go:372: failed to run minikube status. args "out/minikube-windows-amd64.exe -p ha-941700 status -v=7 --alsologtostderr" : exit status 1
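Most of the wall-clock time in the status call above goes to the "[executing ==>]" PowerShell round-trips that poll the VM state and IP address. A standalone sketch of the same style of Hyper-V query through os/exec (illustrative only; the VM name and flags are taken from the logged command line, and this is not minikube's implementation):

    // vmstate.go - illustrative sketch of the PowerShell round-trip seen in
    // the "[executing ==>]" lines above.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        vm := "ha-941700" // VM name taken from the log above
        cmd := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive",
            fmt.Sprintf("(Hyper-V\\Get-VM %s).State", vm))
        out, err := cmd.CombinedOutput()
        if err != nil {
            fmt.Fprintf(os.Stderr, "powershell failed: %v\n%s", err, out)
            os.Exit(1)
        }
        fmt.Println("VM state:", strings.TrimSpace(string(out))) // e.g. "Running"
    }

Judging by the timestamps above, each such round-trip takes roughly two to three seconds, which is why a single status check runs close to ten seconds.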
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-941700 -n ha-941700
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-941700 -n ha-941700: (12.474226s)
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-941700 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p ha-941700 logs -n 25: (8.6620425s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	| Command |                                                           Args                                                            |  Profile  |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	| cp      | ha-941700 cp ha-941700-m03:/home/docker/cp-test.txt                                                                       | ha-941700 | minikube1\jenkins | v1.33.1 | 28 May 24 14:25 PDT | 28 May 24 14:25 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile2463334176\001\cp-test_ha-941700-m03.txt |           |                   |         |                     |                     |
	| ssh     | ha-941700 ssh -n                                                                                                          | ha-941700 | minikube1\jenkins | v1.33.1 | 28 May 24 14:25 PDT | 28 May 24 14:25 PDT |
	|         | ha-941700-m03 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| cp      | ha-941700 cp ha-941700-m03:/home/docker/cp-test.txt                                                                       | ha-941700 | minikube1\jenkins | v1.33.1 | 28 May 24 14:25 PDT | 28 May 24 14:25 PDT |
	|         | ha-941700:/home/docker/cp-test_ha-941700-m03_ha-941700.txt                                                                |           |                   |         |                     |                     |
	| ssh     | ha-941700 ssh -n                                                                                                          | ha-941700 | minikube1\jenkins | v1.33.1 | 28 May 24 14:25 PDT | 28 May 24 14:26 PDT |
	|         | ha-941700-m03 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| ssh     | ha-941700 ssh -n ha-941700 sudo cat                                                                                       | ha-941700 | minikube1\jenkins | v1.33.1 | 28 May 24 14:26 PDT | 28 May 24 14:26 PDT |
	|         | /home/docker/cp-test_ha-941700-m03_ha-941700.txt                                                                          |           |                   |         |                     |                     |
	| cp      | ha-941700 cp ha-941700-m03:/home/docker/cp-test.txt                                                                       | ha-941700 | minikube1\jenkins | v1.33.1 | 28 May 24 14:26 PDT | 28 May 24 14:26 PDT |
	|         | ha-941700-m02:/home/docker/cp-test_ha-941700-m03_ha-941700-m02.txt                                                        |           |                   |         |                     |                     |
	| ssh     | ha-941700 ssh -n                                                                                                          | ha-941700 | minikube1\jenkins | v1.33.1 | 28 May 24 14:26 PDT | 28 May 24 14:26 PDT |
	|         | ha-941700-m03 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| ssh     | ha-941700 ssh -n ha-941700-m02 sudo cat                                                                                   | ha-941700 | minikube1\jenkins | v1.33.1 | 28 May 24 14:26 PDT | 28 May 24 14:26 PDT |
	|         | /home/docker/cp-test_ha-941700-m03_ha-941700-m02.txt                                                                      |           |                   |         |                     |                     |
	| cp      | ha-941700 cp ha-941700-m03:/home/docker/cp-test.txt                                                                       | ha-941700 | minikube1\jenkins | v1.33.1 | 28 May 24 14:26 PDT | 28 May 24 14:27 PDT |
	|         | ha-941700-m04:/home/docker/cp-test_ha-941700-m03_ha-941700-m04.txt                                                        |           |                   |         |                     |                     |
	| ssh     | ha-941700 ssh -n                                                                                                          | ha-941700 | minikube1\jenkins | v1.33.1 | 28 May 24 14:27 PDT | 28 May 24 14:27 PDT |
	|         | ha-941700-m03 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| ssh     | ha-941700 ssh -n ha-941700-m04 sudo cat                                                                                   | ha-941700 | minikube1\jenkins | v1.33.1 | 28 May 24 14:27 PDT | 28 May 24 14:27 PDT |
	|         | /home/docker/cp-test_ha-941700-m03_ha-941700-m04.txt                                                                      |           |                   |         |                     |                     |
	| cp      | ha-941700 cp testdata\cp-test.txt                                                                                         | ha-941700 | minikube1\jenkins | v1.33.1 | 28 May 24 14:27 PDT | 28 May 24 14:27 PDT |
	|         | ha-941700-m04:/home/docker/cp-test.txt                                                                                    |           |                   |         |                     |                     |
	| ssh     | ha-941700 ssh -n                                                                                                          | ha-941700 | minikube1\jenkins | v1.33.1 | 28 May 24 14:27 PDT | 28 May 24 14:27 PDT |
	|         | ha-941700-m04 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| cp      | ha-941700 cp ha-941700-m04:/home/docker/cp-test.txt                                                                       | ha-941700 | minikube1\jenkins | v1.33.1 | 28 May 24 14:27 PDT | 28 May 24 14:27 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile2463334176\001\cp-test_ha-941700-m04.txt |           |                   |         |                     |                     |
	| ssh     | ha-941700 ssh -n                                                                                                          | ha-941700 | minikube1\jenkins | v1.33.1 | 28 May 24 14:27 PDT | 28 May 24 14:28 PDT |
	|         | ha-941700-m04 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| cp      | ha-941700 cp ha-941700-m04:/home/docker/cp-test.txt                                                                       | ha-941700 | minikube1\jenkins | v1.33.1 | 28 May 24 14:28 PDT | 28 May 24 14:28 PDT |
	|         | ha-941700:/home/docker/cp-test_ha-941700-m04_ha-941700.txt                                                                |           |                   |         |                     |                     |
	| ssh     | ha-941700 ssh -n                                                                                                          | ha-941700 | minikube1\jenkins | v1.33.1 | 28 May 24 14:28 PDT | 28 May 24 14:28 PDT |
	|         | ha-941700-m04 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| ssh     | ha-941700 ssh -n ha-941700 sudo cat                                                                                       | ha-941700 | minikube1\jenkins | v1.33.1 | 28 May 24 14:28 PDT | 28 May 24 14:28 PDT |
	|         | /home/docker/cp-test_ha-941700-m04_ha-941700.txt                                                                          |           |                   |         |                     |                     |
	| cp      | ha-941700 cp ha-941700-m04:/home/docker/cp-test.txt                                                                       | ha-941700 | minikube1\jenkins | v1.33.1 | 28 May 24 14:28 PDT | 28 May 24 14:28 PDT |
	|         | ha-941700-m02:/home/docker/cp-test_ha-941700-m04_ha-941700-m02.txt                                                        |           |                   |         |                     |                     |
	| ssh     | ha-941700 ssh -n                                                                                                          | ha-941700 | minikube1\jenkins | v1.33.1 | 28 May 24 14:28 PDT | 28 May 24 14:29 PDT |
	|         | ha-941700-m04 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| ssh     | ha-941700 ssh -n ha-941700-m02 sudo cat                                                                                   | ha-941700 | minikube1\jenkins | v1.33.1 | 28 May 24 14:29 PDT | 28 May 24 14:29 PDT |
	|         | /home/docker/cp-test_ha-941700-m04_ha-941700-m02.txt                                                                      |           |                   |         |                     |                     |
	| cp      | ha-941700 cp ha-941700-m04:/home/docker/cp-test.txt                                                                       | ha-941700 | minikube1\jenkins | v1.33.1 | 28 May 24 14:29 PDT | 28 May 24 14:29 PDT |
	|         | ha-941700-m03:/home/docker/cp-test_ha-941700-m04_ha-941700-m03.txt                                                        |           |                   |         |                     |                     |
	| ssh     | ha-941700 ssh -n                                                                                                          | ha-941700 | minikube1\jenkins | v1.33.1 | 28 May 24 14:29 PDT | 28 May 24 14:29 PDT |
	|         | ha-941700-m04 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| ssh     | ha-941700 ssh -n ha-941700-m03 sudo cat                                                                                   | ha-941700 | minikube1\jenkins | v1.33.1 | 28 May 24 14:29 PDT | 28 May 24 14:29 PDT |
	|         | /home/docker/cp-test_ha-941700-m04_ha-941700-m03.txt                                                                      |           |                   |         |                     |                     |
	| node    | ha-941700 node stop m02 -v=7                                                                                              | ha-941700 | minikube1\jenkins | v1.33.1 | 28 May 24 14:29 PDT | 28 May 24 14:30 PDT |
	|         | --alsologtostderr                                                                                                         |           |                   |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/28 14:00:41
	Running on machine: minikube1
	Binary: Built with gc go1.22.3 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0528 14:00:41.215665   14040 out.go:291] Setting OutFile to fd 1000 ...
	I0528 14:00:41.216243   14040 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 14:00:41.216243   14040 out.go:304] Setting ErrFile to fd 980...
	I0528 14:00:41.216243   14040 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 14:00:41.240976   14040 out.go:298] Setting JSON to false
	I0528 14:00:41.243302   14040 start.go:129] hostinfo: {"hostname":"minikube1","uptime":2806,"bootTime":1716927235,"procs":200,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4412 Build 19045.4412","kernelVersion":"10.0.19045.4412 Build 19045.4412","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0528 14:00:41.243302   14040 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0528 14:00:41.248563   14040 out.go:177] * [ha-941700] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4412 Build 19045.4412
	I0528 14:00:41.252361   14040 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0528 14:00:41.252361   14040 notify.go:220] Checking for updates...
	I0528 14:00:41.255220   14040 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0528 14:00:41.258074   14040 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0528 14:00:41.260263   14040 out.go:177]   - MINIKUBE_LOCATION=18966
	I0528 14:00:41.262257   14040 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0528 14:00:41.264970   14040 driver.go:392] Setting default libvirt URI to qemu:///system
	I0528 14:00:46.671154   14040 out.go:177] * Using the hyperv driver based on user configuration
	I0528 14:00:46.675942   14040 start.go:297] selected driver: hyperv
	I0528 14:00:46.675942   14040 start.go:901] validating driver "hyperv" against <nil>
	I0528 14:00:46.675942   14040 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0528 14:00:46.724606   14040 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0528 14:00:46.725359   14040 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0528 14:00:46.725359   14040 cni.go:84] Creating CNI manager for ""
	I0528 14:00:46.725359   14040 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0528 14:00:46.725359   14040 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0528 14:00:46.725950   14040 start.go:340] cluster config:
	{Name:ha-941700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-941700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0528 14:00:46.726145   14040 iso.go:125] acquiring lock: {Name:mk309f19768252c5551da1c999633aa2617813e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0528 14:00:46.730432   14040 out.go:177] * Starting "ha-941700" primary control-plane node in "ha-941700" cluster
	I0528 14:00:46.733274   14040 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0528 14:00:46.733462   14040 preload.go:147] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
	I0528 14:00:46.733544   14040 cache.go:56] Caching tarball of preloaded images
	I0528 14:00:46.733860   14040 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0528 14:00:46.733972   14040 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0528 14:00:46.734367   14040 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\config.json ...
	I0528 14:00:46.734367   14040 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\config.json: {Name:mkf97a72ae148d91fb70226ee26dc84dfdbe9def Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 14:00:46.735707   14040 start.go:360] acquireMachinesLock for ha-941700: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0528 14:00:46.735707   14040 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-941700"
	I0528 14:00:46.736275   14040 start.go:93] Provisioning new machine with config: &{Name:ha-941700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-941700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0528 14:00:46.736275   14040 start.go:125] createHost starting for "" (driver="hyperv")
	I0528 14:00:46.739364   14040 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0528 14:00:46.739364   14040 start.go:159] libmachine.API.Create for "ha-941700" (driver="hyperv")
	I0528 14:00:46.739364   14040 client.go:168] LocalClient.Create starting
	I0528 14:00:46.740329   14040 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0528 14:00:46.740329   14040 main.go:141] libmachine: Decoding PEM data...
	I0528 14:00:46.740329   14040 main.go:141] libmachine: Parsing certificate...
	I0528 14:00:46.740329   14040 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0528 14:00:46.740329   14040 main.go:141] libmachine: Decoding PEM data...
	I0528 14:00:46.740329   14040 main.go:141] libmachine: Parsing certificate...
	I0528 14:00:46.741378   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0528 14:00:48.823961   14040 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0528 14:00:48.824775   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:00:48.824880   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0528 14:00:50.614565   14040 main.go:141] libmachine: [stdout =====>] : False
	
	I0528 14:00:50.614565   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:00:50.614565   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0528 14:00:52.101420   14040 main.go:141] libmachine: [stdout =====>] : True
	
	I0528 14:00:52.101420   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:00:52.101420   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0528 14:00:55.737389   14040 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0528 14:00:55.738247   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:00:55.740868   14040 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1716398070-18934-amd64.iso...
	I0528 14:00:56.237651   14040 main.go:141] libmachine: Creating SSH key...
	I0528 14:00:56.521678   14040 main.go:141] libmachine: Creating VM...
	I0528 14:00:56.521678   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0528 14:00:59.389144   14040 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0528 14:00:59.389436   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:00:59.389436   14040 main.go:141] libmachine: Using switch "Default Switch"
	I0528 14:00:59.389436   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0528 14:01:01.132003   14040 main.go:141] libmachine: [stdout =====>] : True
	
	I0528 14:01:01.132542   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:01:01.132599   14040 main.go:141] libmachine: Creating VHD
	I0528 14:01:01.132599   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-941700\fixed.vhd' -SizeBytes 10MB -Fixed
	I0528 14:01:04.955374   14040 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-941700\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : D3D2ACC5-F92D-4A9E-AF79-B83C6781150C
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0528 14:01:04.955374   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:01:04.955478   14040 main.go:141] libmachine: Writing magic tar header
	I0528 14:01:04.955478   14040 main.go:141] libmachine: Writing SSH key tar header
	I0528 14:01:04.966129   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-941700\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-941700\disk.vhd' -VHDType Dynamic -DeleteSource
	I0528 14:01:08.176007   14040 main.go:141] libmachine: [stdout =====>] : 
	I0528 14:01:08.176007   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:01:08.176007   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-941700\disk.vhd' -SizeBytes 20000MB
	I0528 14:01:10.786151   14040 main.go:141] libmachine: [stdout =====>] : 
	I0528 14:01:10.786151   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:01:10.786939   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-941700 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-941700' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0528 14:01:14.509428   14040 main.go:141] libmachine: [stdout =====>] : 
	Name      State CPUUsage(%!)(MISSING) MemoryAssigned(M) Uptime   Status             Version
	----      ----- ----------- ----------------- ------   ------             -------
	ha-941700 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0528 14:01:14.509428   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:01:14.509428   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-941700 -DynamicMemoryEnabled $false
	I0528 14:01:16.806352   14040 main.go:141] libmachine: [stdout =====>] : 
	I0528 14:01:16.807198   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:01:16.807350   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-941700 -Count 2
	I0528 14:01:19.010745   14040 main.go:141] libmachine: [stdout =====>] : 
	I0528 14:01:19.010745   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:01:19.010745   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-941700 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-941700\boot2docker.iso'
	I0528 14:01:21.669389   14040 main.go:141] libmachine: [stdout =====>] : 
	I0528 14:01:21.669700   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:01:21.669888   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-941700 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-941700\disk.vhd'
	I0528 14:01:24.448965   14040 main.go:141] libmachine: [stdout =====>] : 
	I0528 14:01:24.449204   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:01:24.449204   14040 main.go:141] libmachine: Starting VM...
	I0528 14:01:24.449334   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-941700
	I0528 14:01:27.527216   14040 main.go:141] libmachine: [stdout =====>] : 
	I0528 14:01:27.527216   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:01:27.527216   14040 main.go:141] libmachine: Waiting for host to start...
	I0528 14:01:27.528214   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700 ).state
	I0528 14:01:29.872069   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:01:29.872731   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:01:29.872803   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700 ).networkadapters[0]).ipaddresses[0]
	I0528 14:01:32.453011   14040 main.go:141] libmachine: [stdout =====>] : 
	I0528 14:01:32.453011   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:01:33.464790   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700 ).state
	I0528 14:01:35.716038   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:01:35.716181   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:01:35.716295   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700 ).networkadapters[0]).ipaddresses[0]
	I0528 14:01:38.249807   14040 main.go:141] libmachine: [stdout =====>] : 
	I0528 14:01:38.249807   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:01:39.255088   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700 ).state
	I0528 14:01:41.465340   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:01:41.465340   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:01:41.466359   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700 ).networkadapters[0]).ipaddresses[0]
	I0528 14:01:43.977879   14040 main.go:141] libmachine: [stdout =====>] : 
	I0528 14:01:43.977879   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:01:44.985783   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700 ).state
	I0528 14:01:47.225376   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:01:47.225601   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:01:47.225701   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700 ).networkadapters[0]).ipaddresses[0]
	I0528 14:01:49.731096   14040 main.go:141] libmachine: [stdout =====>] : 
	I0528 14:01:49.731096   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:01:50.744523   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700 ).state
	I0528 14:01:52.994040   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:01:52.994040   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:01:52.994739   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700 ).networkadapters[0]).ipaddresses[0]
	I0528 14:01:55.544241   14040 main.go:141] libmachine: [stdout =====>] : 172.18.216.124
	
	I0528 14:01:55.545271   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:01:55.545391   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700 ).state
	I0528 14:01:57.733778   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:01:57.733778   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:01:57.734164   14040 machine.go:94] provisionDockerMachine start ...
	I0528 14:01:57.734280   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700 ).state
	I0528 14:01:59.851438   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:01:59.851438   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:01:59.851562   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700 ).networkadapters[0]).ipaddresses[0]
	I0528 14:02:02.382291   14040 main.go:141] libmachine: [stdout =====>] : 172.18.216.124
	
	I0528 14:02:02.382291   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:02:02.388693   14040 main.go:141] libmachine: Using SSH client type: native
	I0528 14:02:02.400838   14040 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5ca4a0] 0x5cd080 <nil>  [] 0s} 172.18.216.124 22 <nil> <nil>}
	I0528 14:02:02.400838   14040 main.go:141] libmachine: About to run SSH command:
	hostname
	I0528 14:02:02.523539   14040 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0528 14:02:02.523655   14040 buildroot.go:166] provisioning hostname "ha-941700"
	I0528 14:02:02.523721   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700 ).state
	I0528 14:02:04.688791   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:02:04.689422   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:02:04.689478   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700 ).networkadapters[0]).ipaddresses[0]
	I0528 14:02:07.241043   14040 main.go:141] libmachine: [stdout =====>] : 172.18.216.124
	
	I0528 14:02:07.241043   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:02:07.248562   14040 main.go:141] libmachine: Using SSH client type: native
	I0528 14:02:07.249146   14040 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5ca4a0] 0x5cd080 <nil>  [] 0s} 172.18.216.124 22 <nil> <nil>}
	I0528 14:02:07.249330   14040 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-941700 && echo "ha-941700" | sudo tee /etc/hostname
	I0528 14:02:07.391960   14040 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-941700
	
	I0528 14:02:07.392109   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700 ).state
	I0528 14:02:09.505992   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:02:09.506266   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:02:09.506394   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700 ).networkadapters[0]).ipaddresses[0]
	I0528 14:02:12.060897   14040 main.go:141] libmachine: [stdout =====>] : 172.18.216.124
	
	I0528 14:02:12.060897   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:02:12.066129   14040 main.go:141] libmachine: Using SSH client type: native
	I0528 14:02:12.066963   14040 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5ca4a0] 0x5cd080 <nil>  [] 0s} 172.18.216.124 22 <nil> <nil>}
	I0528 14:02:12.066963   14040 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-941700' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-941700/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-941700' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0528 14:02:12.205377   14040 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0528 14:02:12.205377   14040 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0528 14:02:12.205377   14040 buildroot.go:174] setting up certificates
	I0528 14:02:12.205377   14040 provision.go:84] configureAuth start
	I0528 14:02:12.205377   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700 ).state
	I0528 14:02:14.365189   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:02:14.366135   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:02:14.366248   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700 ).networkadapters[0]).ipaddresses[0]
	I0528 14:02:16.947810   14040 main.go:141] libmachine: [stdout =====>] : 172.18.216.124
	
	I0528 14:02:16.948390   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:02:16.948624   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700 ).state
	I0528 14:02:19.100669   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:02:19.100669   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:02:19.100906   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700 ).networkadapters[0]).ipaddresses[0]
	I0528 14:02:21.666381   14040 main.go:141] libmachine: [stdout =====>] : 172.18.216.124
	
	I0528 14:02:21.666381   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:02:21.666381   14040 provision.go:143] copyHostCerts
	I0528 14:02:21.667126   14040 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0528 14:02:21.667570   14040 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0528 14:02:21.667644   14040 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0528 14:02:21.668099   14040 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1675 bytes)
	I0528 14:02:21.669247   14040 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0528 14:02:21.669579   14040 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0528 14:02:21.669579   14040 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0528 14:02:21.669579   14040 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0528 14:02:21.671451   14040 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0528 14:02:21.671742   14040 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0528 14:02:21.671742   14040 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0528 14:02:21.672129   14040 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0528 14:02:21.673259   14040 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-941700 san=[127.0.0.1 172.18.216.124 ha-941700 localhost minikube]
	I0528 14:02:21.943414   14040 provision.go:177] copyRemoteCerts
	I0528 14:02:21.956408   14040 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0528 14:02:21.956408   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700 ).state
	I0528 14:02:24.066021   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:02:24.067070   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:02:24.067070   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700 ).networkadapters[0]).ipaddresses[0]
	I0528 14:02:26.630955   14040 main.go:141] libmachine: [stdout =====>] : 172.18.216.124
	
	I0528 14:02:26.630955   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:02:26.630955   14040 sshutil.go:53] new ssh client: &{IP:172.18.216.124 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-941700\id_rsa Username:docker}
	I0528 14:02:26.743322   14040 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7868073s)
	I0528 14:02:26.743322   14040 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0528 14:02:26.743322   14040 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1200 bytes)
	I0528 14:02:26.792526   14040 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0528 14:02:26.792526   14040 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0528 14:02:26.837930   14040 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0528 14:02:26.838124   14040 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0528 14:02:26.883404   14040 provision.go:87] duration metric: took 14.6779977s to configureAuth
	I0528 14:02:26.883404   14040 buildroot.go:189] setting minikube options for container-runtime
	I0528 14:02:26.884401   14040 config.go:182] Loaded profile config "ha-941700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0528 14:02:26.884584   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700 ).state
	I0528 14:02:29.035532   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:02:29.035532   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:02:29.036410   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700 ).networkadapters[0]).ipaddresses[0]
	I0528 14:02:31.565311   14040 main.go:141] libmachine: [stdout =====>] : 172.18.216.124
	
	I0528 14:02:31.565494   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:02:31.571234   14040 main.go:141] libmachine: Using SSH client type: native
	I0528 14:02:31.572038   14040 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5ca4a0] 0x5cd080 <nil>  [] 0s} 172.18.216.124 22 <nil> <nil>}
	I0528 14:02:31.572038   14040 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0528 14:02:31.696668   14040 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0528 14:02:31.696668   14040 buildroot.go:70] root file system type: tmpfs
	I0528 14:02:31.696668   14040 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0528 14:02:31.696668   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700 ).state
	I0528 14:02:33.835961   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:02:33.837348   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:02:33.837456   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700 ).networkadapters[0]).ipaddresses[0]
	I0528 14:02:36.364746   14040 main.go:141] libmachine: [stdout =====>] : 172.18.216.124
	
	I0528 14:02:36.364746   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:02:36.371283   14040 main.go:141] libmachine: Using SSH client type: native
	I0528 14:02:36.372140   14040 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5ca4a0] 0x5cd080 <nil>  [] 0s} 172.18.216.124 22 <nil> <nil>}
	I0528 14:02:36.372140   14040 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0528 14:02:36.533079   14040 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0528 14:02:36.533079   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700 ).state
	I0528 14:02:38.596976   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:02:38.596976   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:02:38.597182   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700 ).networkadapters[0]).ipaddresses[0]
	I0528 14:02:41.182753   14040 main.go:141] libmachine: [stdout =====>] : 172.18.216.124
	
	I0528 14:02:41.182753   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:02:41.188718   14040 main.go:141] libmachine: Using SSH client type: native
	I0528 14:02:41.189361   14040 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5ca4a0] 0x5cd080 <nil>  [] 0s} 172.18.216.124 22 <nil> <nil>}
	I0528 14:02:41.189361   14040 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0528 14:02:43.297391   14040 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0528 14:02:43.297391   14040 machine.go:97] duration metric: took 45.5631359s to provisionDockerMachine
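The unit file written above exposes dockerd on tcp://0.0.0.0:2376 with --tlsverify, using the ca.pem, server.pem and server-key.pem just copied to /etc/docker. A minimal Go sketch of how a client could confirm that TLS endpoint with the matching client-side certs; the cert paths and the 172.18.216.124 address are taken from this log, and the check itself is illustrative rather than minikube's own code:

```go
package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"net/http"
	"os"
)

func main() {
	// Client cert/key and CA as provisioned under the minikube certs dir (paths assumed from the log).
	certFile := `C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem`
	keyFile := `C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem`
	caFile := `C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem`

	cert, err := tls.LoadX509KeyPair(certFile, keyFile)
	if err != nil {
		panic(err)
	}
	caPEM, err := os.ReadFile(caFile)
	if err != nil {
		panic(err)
	}
	pool := x509.NewCertPool()
	if !pool.AppendCertsFromPEM(caPEM) {
		panic("could not parse CA certificate")
	}

	client := &http.Client{
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{
				Certificates: []tls.Certificate{cert},
				RootCAs:      pool, // trust only the minikube CA
			},
		},
	}

	// /_ping is the Docker Engine API health endpoint; 2376 is the TLS port from the unit file.
	resp, err := client.Get("https://172.18.216.124:2376/_ping")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("docker TLS endpoint status:", resp.Status)
}
```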
	I0528 14:02:43.297391   14040 client.go:171] duration metric: took 1m56.5577998s to LocalClient.Create
	I0528 14:02:43.297391   14040 start.go:167] duration metric: took 1m56.5577998s to libmachine.API.Create "ha-941700"
	I0528 14:02:43.297391   14040 start.go:293] postStartSetup for "ha-941700" (driver="hyperv")
	I0528 14:02:43.297391   14040 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0528 14:02:43.311613   14040 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0528 14:02:43.311613   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700 ).state
	I0528 14:02:45.469126   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:02:45.469126   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:02:45.470049   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700 ).networkadapters[0]).ipaddresses[0]
	I0528 14:02:48.019098   14040 main.go:141] libmachine: [stdout =====>] : 172.18.216.124
	
	I0528 14:02:48.019098   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:02:48.019576   14040 sshutil.go:53] new ssh client: &{IP:172.18.216.124 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-941700\id_rsa Username:docker}
	I0528 14:02:48.126537   14040 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8149147s)
	I0528 14:02:48.139311   14040 ssh_runner.go:195] Run: cat /etc/os-release
	I0528 14:02:48.146353   14040 info.go:137] Remote host: Buildroot 2023.02.9
	I0528 14:02:48.146353   14040 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0528 14:02:48.146671   14040 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0528 14:02:48.147513   14040 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\17802.pem -> 17802.pem in /etc/ssl/certs
	I0528 14:02:48.147513   14040 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\17802.pem -> /etc/ssl/certs/17802.pem
	I0528 14:02:48.159490   14040 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0528 14:02:48.178118   14040 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\17802.pem --> /etc/ssl/certs/17802.pem (1708 bytes)
	I0528 14:02:48.230993   14040 start.go:296] duration metric: took 4.9335916s for postStartSetup
	I0528 14:02:48.235922   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700 ).state
	I0528 14:02:50.376755   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:02:50.376755   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:02:50.377480   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700 ).networkadapters[0]).ipaddresses[0]
	I0528 14:02:52.895880   14040 main.go:141] libmachine: [stdout =====>] : 172.18.216.124
	
	I0528 14:02:52.895880   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:02:52.895880   14040 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\config.json ...
	I0528 14:02:52.898951   14040 start.go:128] duration metric: took 2m6.1624287s to createHost
	I0528 14:02:52.898951   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700 ).state
	I0528 14:02:55.001527   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:02:55.001527   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:02:55.001741   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700 ).networkadapters[0]).ipaddresses[0]
	I0528 14:02:57.479140   14040 main.go:141] libmachine: [stdout =====>] : 172.18.216.124
	
	I0528 14:02:57.479549   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:02:57.484541   14040 main.go:141] libmachine: Using SSH client type: native
	I0528 14:02:57.484981   14040 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5ca4a0] 0x5cd080 <nil>  [] 0s} 172.18.216.124 22 <nil> <nil>}
	I0528 14:02:57.485046   14040 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0528 14:02:57.611158   14040 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716930177.606966171
	
	I0528 14:02:57.611158   14040 fix.go:216] guest clock: 1716930177.606966171
	I0528 14:02:57.611158   14040 fix.go:229] Guest: 2024-05-28 14:02:57.606966171 -0700 PDT Remote: 2024-05-28 14:02:52.8989511 -0700 PDT m=+131.776057001 (delta=4.708015071s)
	I0528 14:02:57.611158   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700 ).state
	I0528 14:02:59.693923   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:02:59.694054   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:02:59.694054   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700 ).networkadapters[0]).ipaddresses[0]
	I0528 14:03:02.296993   14040 main.go:141] libmachine: [stdout =====>] : 172.18.216.124
	
	I0528 14:03:02.296993   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:03:02.305067   14040 main.go:141] libmachine: Using SSH client type: native
	I0528 14:03:02.305973   14040 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5ca4a0] 0x5cd080 <nil>  [] 0s} 172.18.216.124 22 <nil> <nil>}
	I0528 14:03:02.305973   14040 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1716930177
	I0528 14:03:02.444170   14040 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue May 28 21:02:57 UTC 2024
	
	I0528 14:03:02.444170   14040 fix.go:236] clock set: Tue May 28 21:02:57 UTC 2024
	 (err=<nil>)
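The clock fix above reads the guest's clock (the intended command is `date +%s.%N`), compares it against the host-side timestamp, and pushes an epoch into the guest with `sudo date -s @<epoch>` when the drift (here about 4.7s) is too large. A small sketch of that delta computation, assuming the guest timestamp has already been read as a float string; the tolerance and the printed command are illustrative, not minikube's exact logic:

```go
package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

func main() {
	// Guest clock as returned by `date +%s.%N` over SSH (value taken from the log above).
	guestRaw := "1716930177.606966171"

	sec, err := strconv.ParseFloat(guestRaw, 64)
	if err != nil {
		panic(err)
	}
	guest := time.Unix(0, int64(sec*float64(time.Second)))
	host := time.Now()

	delta := guest.Sub(host)
	fmt.Printf("guest=%s host=%s delta=%s\n", guest, host, delta)

	// If the drift exceeds a tolerance, reset the guest clock with the same
	// `sudo date -s @<epoch>` form the log shows.
	if math.Abs(delta.Seconds()) > 2 {
		fmt.Printf("would run: sudo date -s @%d\n", host.Unix())
	}
}
```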
	I0528 14:03:02.444170   14040 start.go:83] releasing machines lock for "ha-941700", held for 2m15.7081975s
	I0528 14:03:02.444518   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700 ).state
	I0528 14:03:04.612023   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:03:04.612417   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:03:04.612417   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700 ).networkadapters[0]).ipaddresses[0]
	I0528 14:03:07.182187   14040 main.go:141] libmachine: [stdout =====>] : 172.18.216.124
	
	I0528 14:03:07.182187   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:03:07.187481   14040 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0528 14:03:07.187626   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700 ).state
	I0528 14:03:07.197431   14040 ssh_runner.go:195] Run: cat /version.json
	I0528 14:03:07.197431   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700 ).state
	I0528 14:03:09.462511   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:03:09.462511   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:03:09.462511   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700 ).networkadapters[0]).ipaddresses[0]
	I0528 14:03:09.462511   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:03:09.463173   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:03:09.463173   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700 ).networkadapters[0]).ipaddresses[0]
	I0528 14:03:12.092133   14040 main.go:141] libmachine: [stdout =====>] : 172.18.216.124
	
	I0528 14:03:12.092133   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:03:12.092133   14040 sshutil.go:53] new ssh client: &{IP:172.18.216.124 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-941700\id_rsa Username:docker}
	I0528 14:03:12.137789   14040 main.go:141] libmachine: [stdout =====>] : 172.18.216.124
	
	I0528 14:03:12.137789   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:03:12.137789   14040 sshutil.go:53] new ssh client: &{IP:172.18.216.124 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-941700\id_rsa Username:docker}
	I0528 14:03:12.243414   14040 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.0559227s)
	I0528 14:03:12.243582   14040 ssh_runner.go:235] Completed: cat /version.json: (5.046141s)
	I0528 14:03:12.256208   14040 ssh_runner.go:195] Run: systemctl --version
	I0528 14:03:12.276960   14040 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0528 14:03:12.285768   14040 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0528 14:03:12.298393   14040 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0528 14:03:12.325784   14040 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0528 14:03:12.325784   14040 start.go:494] detecting cgroup driver to use...
	I0528 14:03:12.325784   14040 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0528 14:03:12.372789   14040 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0528 14:03:12.408180   14040 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0528 14:03:12.429851   14040 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0528 14:03:12.442364   14040 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0528 14:03:12.475413   14040 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0528 14:03:12.512039   14040 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0528 14:03:12.548169   14040 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0528 14:03:12.580758   14040 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0528 14:03:12.612164   14040 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0528 14:03:12.643175   14040 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0528 14:03:12.673752   14040 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0528 14:03:12.703758   14040 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0528 14:03:12.732750   14040 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0528 14:03:12.763769   14040 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 14:03:12.971815   14040 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0528 14:03:13.004031   14040 start.go:494] detecting cgroup driver to use...
	I0528 14:03:13.016618   14040 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0528 14:03:13.055523   14040 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0528 14:03:13.094094   14040 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0528 14:03:13.135174   14040 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0528 14:03:13.170627   14040 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0528 14:03:13.207218   14040 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0528 14:03:13.269076   14040 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0528 14:03:13.294336   14040 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0528 14:03:13.341736   14040 ssh_runner.go:195] Run: which cri-dockerd
	I0528 14:03:13.363058   14040 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0528 14:03:13.384564   14040 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0528 14:03:13.429678   14040 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0528 14:03:13.624925   14040 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0528 14:03:13.800851   14040 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0528 14:03:13.801018   14040 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0528 14:03:13.850564   14040 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 14:03:14.047998   14040 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0528 14:03:16.583666   14040 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5356299s)
	I0528 14:03:16.598143   14040 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0528 14:03:16.643710   14040 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0528 14:03:16.685641   14040 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0528 14:03:16.879615   14040 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0528 14:03:17.079329   14040 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 14:03:17.284829   14040 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0528 14:03:17.325787   14040 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0528 14:03:17.367941   14040 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 14:03:17.574418   14040 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0528 14:03:17.686861   14040 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0528 14:03:17.699673   14040 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0528 14:03:17.709690   14040 start.go:562] Will wait 60s for crictl version
	I0528 14:03:17.721680   14040 ssh_runner.go:195] Run: which crictl
	I0528 14:03:17.741181   14040 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0528 14:03:17.799611   14040 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
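start.go above waits up to 60s for the /var/run/cri-dockerd.sock path and then up to 60s for crictl to respond before continuing. A minimal sketch of that kind of bounded wait on a socket path; the path and timeout come from the log, while the polling interval is an assumption:

```go
package main

import (
	"fmt"
	"os"
	"time"
)

// waitForPath polls for a filesystem path until it exists or the timeout elapses.
func waitForPath(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	if err := waitForPath("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("cri-dockerd socket is present")
}
```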
	I0528 14:03:17.808594   14040 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0528 14:03:17.853206   14040 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0528 14:03:17.890736   14040 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.0.2 ...
	I0528 14:03:17.890736   14040 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0528 14:03:17.895426   14040 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0528 14:03:17.895426   14040 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0528 14:03:17.895426   14040 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0528 14:03:17.895426   14040 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:91:32:cc Flags:up|broadcast|multicast|running}
	I0528 14:03:17.898425   14040 ip.go:210] interface addr: fe80::6a65:d82d:f435:d71f/64
	I0528 14:03:17.898425   14040 ip.go:210] interface addr: 172.18.208.1/20
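ip.go above scans the host's adapters for one whose name matches the "vEthernet (Default Switch)" prefix and records its IPv4 address (172.18.208.1), which then becomes the host.minikube.internal entry written into the guest's /etc/hosts below. A minimal standard-library sketch of that lookup, with the prefix taken from the log:

```go
package main

import (
	"fmt"
	"net"
	"strings"
)

func main() {
	const prefix = "vEthernet (Default Switch)"

	ifaces, err := net.Interfaces()
	if err != nil {
		panic(err)
	}
	for _, iface := range ifaces {
		if !strings.HasPrefix(iface.Name, prefix) {
			continue
		}
		addrs, err := iface.Addrs()
		if err != nil {
			panic(err)
		}
		for _, addr := range addrs {
			ipNet, ok := addr.(*net.IPNet)
			if !ok || ipNet.IP.To4() == nil {
				continue // skip the fe80:: link-local address also listed in the log
			}
			// This IPv4 address is what ends up as host.minikube.internal in the guest.
			fmt.Printf("%s -> %s\n", iface.Name, ipNet.IP)
			return
		}
	}
	fmt.Println("no matching interface found")
}
```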
	I0528 14:03:17.910426   14040 ssh_runner.go:195] Run: grep 172.18.208.1	host.minikube.internal$ /etc/hosts
	I0528 14:03:17.917657   14040 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.18.208.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0528 14:03:17.954346   14040 kubeadm.go:877] updating cluster {Name:ha-941700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-941700 Namespace:default APIServerHAVIP:172.18.223.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.18.216.124 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0528 14:03:17.954346   14040 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0528 14:03:17.963004   14040 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0528 14:03:17.994085   14040 docker.go:685] Got preloaded images: 
	I0528 14:03:17.994114   14040 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.1 wasn't preloaded
	I0528 14:03:18.007112   14040 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0528 14:03:18.040821   14040 ssh_runner.go:195] Run: which lz4
	I0528 14:03:18.047012   14040 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0528 14:03:18.060079   14040 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0528 14:03:18.067106   14040 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0528 14:03:18.067295   14040 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359564351 bytes)
	I0528 14:03:20.197599   14040 docker.go:649] duration metric: took 2.1503198s to copy over tarball
	I0528 14:03:20.210728   14040 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0528 14:03:28.720781   14040 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.509954s)
	I0528 14:03:28.720847   14040 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0528 14:03:28.790317   14040 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0528 14:03:28.807803   14040 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0528 14:03:28.853178   14040 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 14:03:29.069485   14040 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0528 14:03:32.094179   14040 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.0246872s)
	I0528 14:03:32.106990   14040 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0528 14:03:32.129066   14040 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0528 14:03:32.129066   14040 cache_images.go:84] Images are preloaded, skipping loading
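docker.go decides whether the preload tarball is needed by listing the images already in the runtime and checking for a required one (registry.k8s.io/kube-apiserver:v1.30.1 above); after the tarball is extracted the same listing shows the full set, so loading is skipped. A small sketch of that presence check, shelling out to the same `docker images --format` invocation seen in the log:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hasImage returns true if the given repo:tag shows up in `docker images`.
func hasImage(required string) (bool, error) {
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		return false, err
	}
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line == required {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.30.1")
	if err != nil {
		panic(err)
	}
	if ok {
		fmt.Println("images are preloaded, skipping loading")
	} else {
		fmt.Println("kube-apiserver image missing, preload tarball required")
	}
}
```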
	I0528 14:03:32.129066   14040 kubeadm.go:928] updating node { 172.18.216.124 8443 v1.30.1 docker true true} ...
	I0528 14:03:32.129066   14040 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-941700 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.18.216.124
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-941700 Namespace:default APIServerHAVIP:172.18.223.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0528 14:03:32.139182   14040 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0528 14:03:32.172613   14040 cni.go:84] Creating CNI manager for ""
	I0528 14:03:32.172691   14040 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0528 14:03:32.172691   14040 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0528 14:03:32.172820   14040 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.18.216.124 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-941700 NodeName:ha-941700 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.18.216.124"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.18.216.124 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0528 14:03:32.172925   14040 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.18.216.124
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-941700"
	  kubeletExtraArgs:
	    node-ip: 172.18.216.124
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.18.216.124"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0528 14:03:32.172925   14040 kube-vip.go:115] generating kube-vip config ...
	I0528 14:03:32.185270   14040 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0528 14:03:32.213670   14040 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0528 14:03:32.214500   14040 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.18.223.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
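The kube-vip static-pod manifest above is generated from the cluster's HA VIP (172.18.223.254), the guest NIC (eth0) and the API server port (8443). A minimal text/template sketch of how such a manifest could be filled in; the trimmed template below is an illustrative stand-in, not minikube's actual kube-vip template:

```go
package main

import (
	"os"
	"text/template"
)

// vipConfig holds the values that vary per cluster in the kube-vip manifest.
type vipConfig struct {
	VIP       string
	Interface string
	Port      string
}

// A trimmed, illustrative template: just the env entries that carry the VIP settings.
const vipTemplate = `    env:
    - name: vip_interface
      value: {{ .Interface }}
    - name: port
      value: "{{ .Port }}"
    - name: address
      value: {{ .VIP }}
`

func main() {
	tmpl := template.Must(template.New("kube-vip").Parse(vipTemplate))
	cfg := vipConfig{VIP: "172.18.223.254", Interface: "eth0", Port: "8443"}
	if err := tmpl.Execute(os.Stdout, cfg); err != nil {
		panic(err)
	}
}
```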
	I0528 14:03:32.226695   14040 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0528 14:03:32.242223   14040 binaries.go:44] Found k8s binaries, skipping transfer
	I0528 14:03:32.256710   14040 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0528 14:03:32.276179   14040 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (310 bytes)
	I0528 14:03:32.310564   14040 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0528 14:03:32.343610   14040 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0528 14:03:32.377724   14040 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0528 14:03:32.431016   14040 ssh_runner.go:195] Run: grep 172.18.223.254	control-plane.minikube.internal$ /etc/hosts
	I0528 14:03:32.438491   14040 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.18.223.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0528 14:03:32.473447   14040 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 14:03:32.676515   14040 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0528 14:03:32.706000   14040 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700 for IP: 172.18.216.124
	I0528 14:03:32.706000   14040 certs.go:194] generating shared ca certs ...
	I0528 14:03:32.706000   14040 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 14:03:32.706706   14040 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0528 14:03:32.706706   14040 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0528 14:03:32.707437   14040 certs.go:256] generating profile certs ...
	I0528 14:03:32.708184   14040 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\client.key
	I0528 14:03:32.708349   14040 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\client.crt with IP's: []
	I0528 14:03:32.870551   14040 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\client.crt ...
	I0528 14:03:32.870551   14040 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\client.crt: {Name:mkd6bfba6f9e22f0dca5f732668ee3226d4be5b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 14:03:32.872767   14040 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\client.key ...
	I0528 14:03:32.872767   14040 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\client.key: {Name:mkc21a04343ad6467dbed833ebeb4d5c00ed91f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 14:03:32.874160   14040 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\apiserver.key.bc149533
	I0528 14:03:32.874160   14040 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\apiserver.crt.bc149533 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.18.216.124 172.18.223.254]
	I0528 14:03:33.010966   14040 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\apiserver.crt.bc149533 ...
	I0528 14:03:33.010966   14040 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\apiserver.crt.bc149533: {Name:mk77fe2a64ed855fe669ad55a966fc5e5a5a715f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 14:03:33.011967   14040 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\apiserver.key.bc149533 ...
	I0528 14:03:33.011967   14040 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\apiserver.key.bc149533: {Name:mk2bd2e3ecda0b12866e7a4e1637306ac0f10a4d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 14:03:33.012758   14040 certs.go:381] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\apiserver.crt.bc149533 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\apiserver.crt
	I0528 14:03:33.024793   14040 certs.go:385] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\apiserver.key.bc149533 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\apiserver.key
	I0528 14:03:33.025665   14040 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\proxy-client.key
	I0528 14:03:33.026738   14040 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\proxy-client.crt with IP's: []
	I0528 14:03:33.222783   14040 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\proxy-client.crt ...
	I0528 14:03:33.222783   14040 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\proxy-client.crt: {Name:mkaec3a95c5cae384c19f826f464eead18aa4ffd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 14:03:33.224742   14040 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\proxy-client.key ...
	I0528 14:03:33.224742   14040 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\proxy-client.key: {Name:mkf7c1f4a975cb393ef78c9197c52017ef37c2bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
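certs.go is generating the profile's client, apiserver and aggregator certificates here; the apiserver cert is signed by the shared minikube CA and carries the service IP, loopback, node and VIP addresses as SANs. A compact crypto/x509 sketch of issuing a CA-signed certificate with IP SANs; the SAN list matches the log, but the keys, subjects and lifetimes are illustrative rather than minikube's exact parameters:

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Self-signed CA (stands in for the shared minikubeCA).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	caCert, err := x509.ParseCertificate(caDER)
	if err != nil {
		panic(err)
	}

	// Serving certificate with the IP SANs seen in the log above.
	leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	leafTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("172.18.216.124"),
			net.ParseIP("172.18.223.254"),
		},
	}
	leafDER, err := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
}
```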
	I0528 14:03:33.225724   14040 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0528 14:03:33.226204   14040 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0528 14:03:33.226456   14040 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0528 14:03:33.226643   14040 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0528 14:03:33.226778   14040 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0528 14:03:33.226929   14040 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0528 14:03:33.227078   14040 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0528 14:03:33.235287   14040 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0528 14:03:33.236287   14040 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\1780.pem (1338 bytes)
	W0528 14:03:33.237016   14040 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\1780_empty.pem, impossibly tiny 0 bytes
	I0528 14:03:33.237103   14040 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0528 14:03:33.237421   14040 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0528 14:03:33.237929   14040 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0528 14:03:33.238146   14040 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0528 14:03:33.238649   14040 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\17802.pem (1708 bytes)
	I0528 14:03:33.239094   14040 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\17802.pem -> /usr/share/ca-certificates/17802.pem
	I0528 14:03:33.239325   14040 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0528 14:03:33.239581   14040 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\1780.pem -> /usr/share/ca-certificates/1780.pem
	I0528 14:03:33.239791   14040 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0528 14:03:33.291249   14040 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0528 14:03:33.335192   14040 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0528 14:03:33.394471   14040 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0528 14:03:33.438098   14040 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0528 14:03:33.481886   14040 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0528 14:03:33.523904   14040 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0528 14:03:33.567035   14040 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0528 14:03:33.600677   14040 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\17802.pem --> /usr/share/ca-certificates/17802.pem (1708 bytes)
	I0528 14:03:33.636018   14040 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0528 14:03:33.679129   14040 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\1780.pem --> /usr/share/ca-certificates/1780.pem (1338 bytes)
	I0528 14:03:33.720581   14040 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0528 14:03:33.763385   14040 ssh_runner.go:195] Run: openssl version
	I0528 14:03:33.785005   14040 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1780.pem && ln -fs /usr/share/ca-certificates/1780.pem /etc/ssl/certs/1780.pem"
	I0528 14:03:33.817774   14040 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1780.pem
	I0528 14:03:33.826748   14040 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 28 20:41 /usr/share/ca-certificates/1780.pem
	I0528 14:03:33.838002   14040 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1780.pem
	I0528 14:03:33.860084   14040 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1780.pem /etc/ssl/certs/51391683.0"
	I0528 14:03:33.892480   14040 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17802.pem && ln -fs /usr/share/ca-certificates/17802.pem /etc/ssl/certs/17802.pem"
	I0528 14:03:33.925706   14040 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17802.pem
	I0528 14:03:33.932708   14040 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 28 20:41 /usr/share/ca-certificates/17802.pem
	I0528 14:03:33.945459   14040 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17802.pem
	I0528 14:03:33.967049   14040 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/17802.pem /etc/ssl/certs/3ec20f2e.0"
	I0528 14:03:33.996070   14040 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0528 14:03:34.026843   14040 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0528 14:03:34.034004   14040 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 28 20:25 /usr/share/ca-certificates/minikubeCA.pem
	I0528 14:03:34.045774   14040 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0528 14:03:34.069462   14040 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
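The symlink names used above (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject hashes: `openssl x509 -hash -noout -in <cert>` prints the hash, and the cert is linked as <hash>.0 under /etc/ssl/certs so OpenSSL-based clients can find it by directory lookup. A small sketch of that hash-and-link step, shelling out to openssl the same way the log does; the path is the one from the log, and the program assumes it runs with enough privileges to write under /etc/ssl/certs:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash creates /etc/ssl/certs/<hash>.0 pointing at certPath,
// mirroring the `openssl x509 -hash` plus `ln -fs` commands in the log.
func linkBySubjectHash(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// Replace any stale link, as `ln -fs` would.
	_ = os.Remove(link)
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("linked minikubeCA.pem by subject hash")
}
```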
	I0528 14:03:34.103274   14040 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0528 14:03:34.109206   14040 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0528 14:03:34.109206   14040 kubeadm.go:391] StartCluster: {Name:ha-941700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-941700 Namespace:default APIServerHAVIP:172.18.223.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.18.216.124 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0528 14:03:34.117749   14040 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0528 14:03:34.155043   14040 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0528 14:03:34.195800   14040 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0528 14:03:34.228742   14040 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0528 14:03:34.252068   14040 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0528 14:03:34.252068   14040 kubeadm.go:156] found existing configuration files:
	
	I0528 14:03:34.268296   14040 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0528 14:03:34.291951   14040 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0528 14:03:34.307534   14040 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0528 14:03:34.337065   14040 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0528 14:03:34.357128   14040 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0528 14:03:34.369326   14040 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0528 14:03:34.401032   14040 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0528 14:03:34.418174   14040 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0528 14:03:34.432989   14040 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0528 14:03:34.462724   14040 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0528 14:03:34.481776   14040 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0528 14:03:34.493669   14040 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
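
The four grep/rm pairs above are the stale-config check: each expected kubeconfig is grepped for the control-plane endpoint, and any file that does not reference it (or does not exist) is removed before kubeadm init runs. A minimal Go sketch of that pattern, assuming a local stand-in for minikube's SSH runner (illustrative only, not the project's actual code):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // run is a stand-in for minikube's ssh_runner; here it executes locally.
    func run(name string, args ...string) error {
        return exec.Command(name, args...).Run()
    }

    func main() {
        endpoint := "https://control-plane.minikube.internal:8443"
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            // grep exits non-zero when the endpoint (or the file) is missing,
            // in which case the stale file is removed before kubeadm init.
            if err := run("sudo", "grep", endpoint, f); err != nil {
                fmt.Println("removing stale", f)
                _ = run("sudo", "rm", "-f", f)
            }
        }
    }
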
	I0528 14:03:34.512075   14040 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0528 14:03:34.920263   14040 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0528 14:03:49.028042   14040 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0528 14:03:49.028102   14040 kubeadm.go:309] [preflight] Running pre-flight checks
	I0528 14:03:49.028374   14040 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0528 14:03:49.028662   14040 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0528 14:03:49.028833   14040 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0528 14:03:49.029081   14040 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0528 14:03:49.032162   14040 out.go:204]   - Generating certificates and keys ...
	I0528 14:03:49.032530   14040 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0528 14:03:49.032628   14040 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0528 14:03:49.033117   14040 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0528 14:03:49.033271   14040 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0528 14:03:49.033271   14040 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0528 14:03:49.033271   14040 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0528 14:03:49.033271   14040 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0528 14:03:49.033851   14040 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-941700 localhost] and IPs [172.18.216.124 127.0.0.1 ::1]
	I0528 14:03:49.033881   14040 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0528 14:03:49.033881   14040 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-941700 localhost] and IPs [172.18.216.124 127.0.0.1 ::1]
	I0528 14:03:49.033881   14040 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0528 14:03:49.034589   14040 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0528 14:03:49.034745   14040 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0528 14:03:49.034799   14040 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0528 14:03:49.034799   14040 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0528 14:03:49.034799   14040 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0528 14:03:49.034799   14040 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0528 14:03:49.035396   14040 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0528 14:03:49.035613   14040 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0528 14:03:49.035808   14040 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0528 14:03:49.035808   14040 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0528 14:03:49.038795   14040 out.go:204]   - Booting up control plane ...
	I0528 14:03:49.039549   14040 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0528 14:03:49.039549   14040 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0528 14:03:49.039549   14040 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0528 14:03:49.039549   14040 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0528 14:03:49.040351   14040 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0528 14:03:49.040579   14040 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0528 14:03:49.040835   14040 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0528 14:03:49.040869   14040 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0528 14:03:49.041083   14040 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.00150063s
	I0528 14:03:49.041251   14040 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0528 14:03:49.041518   14040 kubeadm.go:309] [api-check] The API server is healthy after 8.003115572s
	I0528 14:03:49.041707   14040 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0528 14:03:49.042061   14040 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0528 14:03:49.042061   14040 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0528 14:03:49.042481   14040 kubeadm.go:309] [mark-control-plane] Marking the node ha-941700 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0528 14:03:49.042481   14040 kubeadm.go:309] [bootstrap-token] Using token: pykq7r.snsu11ybm3wlpgyb
	I0528 14:03:49.046616   14040 out.go:204]   - Configuring RBAC rules ...
	I0528 14:03:49.047523   14040 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0528 14:03:49.047768   14040 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0528 14:03:49.048069   14040 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0528 14:03:49.048380   14040 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0528 14:03:49.048429   14040 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0528 14:03:49.048429   14040 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0528 14:03:49.048429   14040 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0528 14:03:49.048429   14040 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0528 14:03:49.048429   14040 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0528 14:03:49.048429   14040 kubeadm.go:309] 
	I0528 14:03:49.048429   14040 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0528 14:03:49.048429   14040 kubeadm.go:309] 
	I0528 14:03:49.049453   14040 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0528 14:03:49.049453   14040 kubeadm.go:309] 
	I0528 14:03:49.049635   14040 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0528 14:03:49.049811   14040 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0528 14:03:49.049924   14040 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0528 14:03:49.049924   14040 kubeadm.go:309] 
	I0528 14:03:49.050035   14040 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0528 14:03:49.050035   14040 kubeadm.go:309] 
	I0528 14:03:49.050035   14040 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0528 14:03:49.050035   14040 kubeadm.go:309] 
	I0528 14:03:49.050035   14040 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0528 14:03:49.050035   14040 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0528 14:03:49.050822   14040 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0528 14:03:49.050822   14040 kubeadm.go:309] 
	I0528 14:03:49.051090   14040 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0528 14:03:49.051337   14040 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0528 14:03:49.051417   14040 kubeadm.go:309] 
	I0528 14:03:49.051687   14040 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token pykq7r.snsu11ybm3wlpgyb \
	I0528 14:03:49.052034   14040 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:539ff17077d9a18fe6ace348cbfbe22affc6c6390f84ec7c9294c19b6ec125ea \
	I0528 14:03:49.052157   14040 kubeadm.go:309] 	--control-plane 
	I0528 14:03:49.052157   14040 kubeadm.go:309] 
	I0528 14:03:49.052247   14040 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0528 14:03:49.052371   14040 kubeadm.go:309] 
	I0528 14:03:49.052624   14040 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token pykq7r.snsu11ybm3wlpgyb \
	I0528 14:03:49.053055   14040 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:539ff17077d9a18fe6ace348cbfbe22affc6c6390f84ec7c9294c19b6ec125ea 
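
The join commands printed by kubeadm above are what additional control-plane and worker nodes run. A minimal Go sketch of executing such a join over SSH on a freshly provisioned node (this is not minikube's implementation; the key path, user, IP, and the <token>/<hash> placeholders are assumptions to be filled in from the log):

    package main

    import (
        "fmt"
        "log"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        // Placeholder key path; minikube keeps one per machine under .minikube\machines.
        key, err := os.ReadFile("path/to/machines/id_rsa")
        if err != nil {
            log.Fatal(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            log.Fatal(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for throwaway test VMs
        }
        client, err := ssh.Dial("tcp", "172.18.211.44:22", cfg)
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()
        session, err := client.NewSession()
        if err != nil {
            log.Fatal(err)
        }
        defer session.Close()
        // <token> and <hash> stand for the values printed by kubeadm init above.
        out, err := session.CombinedOutput("sudo kubeadm join control-plane.minikube.internal:8443" +
            " --token <token> --discovery-token-ca-cert-hash sha256:<hash> --control-plane")
        fmt.Println(string(out), err)
    }
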
	I0528 14:03:49.053099   14040 cni.go:84] Creating CNI manager for ""
	I0528 14:03:49.053099   14040 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0528 14:03:49.055210   14040 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0528 14:03:49.072907   14040 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0528 14:03:49.080197   14040 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.1/kubectl ...
	I0528 14:03:49.080197   14040 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0528 14:03:49.127881   14040 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0528 14:03:49.726245   14040 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0528 14:03:49.742480   14040 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 14:03:49.742480   14040 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-941700 minikube.k8s.io/updated_at=2024_05_28T14_03_49_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=c95b4fdda455689199e2a93674568b261e34dc82 minikube.k8s.io/name=ha-941700 minikube.k8s.io/primary=true
	I0528 14:03:49.771127   14040 ops.go:34] apiserver oom_adj: -16
	I0528 14:03:49.980859   14040 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 14:03:50.489017   14040 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 14:03:50.991343   14040 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 14:03:51.479624   14040 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 14:03:51.985188   14040 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 14:03:52.487761   14040 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 14:03:52.988664   14040 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 14:03:53.491610   14040 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 14:03:53.995576   14040 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 14:03:54.481920   14040 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 14:03:54.985228   14040 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 14:03:55.485682   14040 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 14:03:55.991805   14040 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 14:03:56.491742   14040 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 14:03:56.980637   14040 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 14:03:57.485868   14040 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 14:03:57.988139   14040 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 14:03:58.490622   14040 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 14:03:58.993516   14040 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 14:03:59.493373   14040 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 14:03:59.979148   14040 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 14:04:00.483408   14040 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 14:04:00.989055   14040 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 14:04:01.494943   14040 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 14:04:01.984817   14040 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 14:04:02.490629   14040 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 14:04:02.994067   14040 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 14:04:03.147615   14040 kubeadm.go:1107] duration metric: took 13.4213426s to wait for elevateKubeSystemPrivileges
	W0528 14:04:03.147762   14040 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0528 14:04:03.147762   14040 kubeadm.go:393] duration metric: took 29.038497s to StartCluster
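
The run of identical "kubectl get sa default" calls above is a readiness poll: startup only proceeds once the controller manager has created the default ServiceAccount. A compact sketch of the same polling pattern (illustrative; it uses kubectl from PATH rather than the node's bundled binary):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            // Succeeds only once the "default" ServiceAccount exists.
            err := exec.Command("kubectl", "--kubeconfig", "/var/lib/minikube/kubeconfig",
                "get", "sa", "default").Run()
            if err == nil {
                fmt.Println("default ServiceAccount is ready")
                return
            }
            time.Sleep(500 * time.Millisecond) // roughly the cadence seen in the log
        }
        fmt.Println("timed out waiting for the default ServiceAccount")
    }
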
	I0528 14:04:03.147762   14040 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 14:04:03.147762   14040 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0528 14:04:03.149375   14040 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 14:04:03.150660   14040 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0528 14:04:03.150800   14040 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:172.18.216.124 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0528 14:04:03.150800   14040 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0528 14:04:03.150866   14040 start.go:240] waiting for startup goroutines ...
	I0528 14:04:03.150931   14040 addons.go:69] Setting storage-provisioner=true in profile "ha-941700"
	I0528 14:04:03.150931   14040 addons.go:69] Setting default-storageclass=true in profile "ha-941700"
	I0528 14:04:03.150931   14040 addons.go:234] Setting addon storage-provisioner=true in "ha-941700"
	I0528 14:04:03.151087   14040 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-941700"
	I0528 14:04:03.151087   14040 config.go:182] Loaded profile config "ha-941700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0528 14:04:03.151228   14040 host.go:66] Checking if "ha-941700" exists ...
	I0528 14:04:03.151228   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700 ).state
	I0528 14:04:03.152223   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700 ).state
	I0528 14:04:03.340405   14040 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.18.208.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0528 14:04:03.873537   14040 start.go:946] {"host.minikube.internal": 172.18.208.1} host record injected into CoreDNS's ConfigMap
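
The sed pipeline above splices a hosts{} block into the coredns ConfigMap so pods can resolve host.minikube.internal to the Windows host's gateway address. A small Go sketch of the same text transformation, assuming the Corefile has already been fetched (the sample Corefile and helper name are illustrative):

    package main

    import (
        "fmt"
        "strings"
    )

    // injectHostRecord inserts a hosts{} block ahead of the forward directive,
    // mirroring what the sed expression in the log does to the Corefile.
    func injectHostRecord(corefile, hostIP string) string {
        block := "        hosts {\n           " + hostIP + " host.minikube.internal\n" +
            "           fallthrough\n        }\n"
        return strings.Replace(corefile,
            "        forward . /etc/resolv.conf",
            block+"        forward . /etc/resolv.conf", 1)
    }

    func main() {
        corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf {\n" +
            "           max_concurrent 1000\n        }\n}\n"
        fmt.Println(injectHostRecord(corefile, "172.18.208.1"))
    }
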
	I0528 14:04:05.444466   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:04:05.444466   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:04:05.445462   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:04:05.445462   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:04:05.448540   14040 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0528 14:04:05.446010   14040 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0528 14:04:05.449448   14040 kapi.go:59] client config for ha-941700: &rest.Config{Host:"https://172.18.223.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-941700\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-941700\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Ne
xtProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1a6b5a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0528 14:04:05.451081   14040 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0528 14:04:05.451081   14040 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0528 14:04:05.451081   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700 ).state
	I0528 14:04:05.452000   14040 cert_rotation.go:137] Starting client certificate rotation controller
	I0528 14:04:05.452761   14040 addons.go:234] Setting addon default-storageclass=true in "ha-941700"
	I0528 14:04:05.452761   14040 host.go:66] Checking if "ha-941700" exists ...
	I0528 14:04:05.454123   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700 ).state
	I0528 14:04:07.775432   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:04:07.775432   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:04:07.776093   14040 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0528 14:04:07.776093   14040 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0528 14:04:07.776160   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700 ).state
	I0528 14:04:07.847895   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:04:07.847895   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:04:07.848698   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700 ).networkadapters[0]).ipaddresses[0]
	I0528 14:04:10.076562   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:04:10.076817   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:04:10.076881   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700 ).networkadapters[0]).ipaddresses[0]
	I0528 14:04:10.585304   14040 main.go:141] libmachine: [stdout =====>] : 172.18.216.124
	
	I0528 14:04:10.585304   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:04:10.585744   14040 sshutil.go:53] new ssh client: &{IP:172.18.216.124 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-941700\id_rsa Username:docker}
	I0528 14:04:10.722181   14040 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0528 14:04:12.759321   14040 main.go:141] libmachine: [stdout =====>] : 172.18.216.124
	
	I0528 14:04:12.759321   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:04:12.759321   14040 sshutil.go:53] new ssh client: &{IP:172.18.216.124 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-941700\id_rsa Username:docker}
	I0528 14:04:12.908859   14040 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0528 14:04:13.106089   14040 round_trippers.go:463] GET https://172.18.223.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0528 14:04:13.106089   14040 round_trippers.go:469] Request Headers:
	I0528 14:04:13.106089   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:04:13.106089   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:04:13.121218   14040 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0528 14:04:13.124313   14040 round_trippers.go:463] PUT https://172.18.223.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0528 14:04:13.124313   14040 round_trippers.go:469] Request Headers:
	I0528 14:04:13.124313   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:04:13.124313   14040 round_trippers.go:473]     Content-Type: application/json
	I0528 14:04:13.124313   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:04:13.131143   14040 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0528 14:04:13.135479   14040 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0528 14:04:13.139206   14040 addons.go:510] duration metric: took 9.988386s for enable addons: enabled=[storage-provisioner default-storageclass]
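
Enabling an addon, as shown above, comes down to copying its manifest onto the node and applying it with the bundled kubectl against the node-local kubeconfig. A minimal sketch of that apply step (the applyAddon helper is illustrative, not minikube's API; the paths are taken from the log):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func applyAddon(manifest string) error {
        // Mirrors the log: sudo KUBECONFIG=... kubectl apply -f <manifest>
        cmd := exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
            "/var/lib/minikube/binaries/v1.30.1/kubectl", "apply", "-f", manifest)
        out, err := cmd.CombinedOutput()
        fmt.Print(string(out))
        return err
    }

    func main() {
        for _, m := range []string{
            "/etc/kubernetes/addons/storage-provisioner.yaml",
            "/etc/kubernetes/addons/storageclass.yaml",
        } {
            if err := applyAddon(m); err != nil {
                fmt.Println("apply failed:", err)
            }
        }
    }
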
	I0528 14:04:13.139206   14040 start.go:245] waiting for cluster config update ...
	I0528 14:04:13.139206   14040 start.go:254] writing updated cluster config ...
	I0528 14:04:13.142574   14040 out.go:177] 
	I0528 14:04:13.154943   14040 config.go:182] Loaded profile config "ha-941700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0528 14:04:13.154943   14040 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\config.json ...
	I0528 14:04:13.198350   14040 out.go:177] * Starting "ha-941700-m02" control-plane node in "ha-941700" cluster
	I0528 14:04:13.244741   14040 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0528 14:04:13.244741   14040 cache.go:56] Caching tarball of preloaded images
	I0528 14:04:13.245591   14040 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0528 14:04:13.245591   14040 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0528 14:04:13.246222   14040 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\config.json ...
	I0528 14:04:13.248893   14040 start.go:360] acquireMachinesLock for ha-941700-m02: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0528 14:04:13.248893   14040 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-941700-m02"
	I0528 14:04:13.248893   14040 start.go:93] Provisioning new machine with config: &{Name:ha-941700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuberne
tesVersion:v1.30.1 ClusterName:ha-941700 Namespace:default APIServerHAVIP:172.18.223.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.18.216.124 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDis
ks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0528 14:04:13.248893   14040 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0528 14:04:13.253739   14040 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0528 14:04:13.253739   14040 start.go:159] libmachine.API.Create for "ha-941700" (driver="hyperv")
	I0528 14:04:13.253739   14040 client.go:168] LocalClient.Create starting
	I0528 14:04:13.254427   14040 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0528 14:04:13.254427   14040 main.go:141] libmachine: Decoding PEM data...
	I0528 14:04:13.254427   14040 main.go:141] libmachine: Parsing certificate...
	I0528 14:04:13.255112   14040 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0528 14:04:13.255112   14040 main.go:141] libmachine: Decoding PEM data...
	I0528 14:04:13.255112   14040 main.go:141] libmachine: Parsing certificate...
	I0528 14:04:13.255112   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0528 14:04:15.219282   14040 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0528 14:04:15.220304   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:04:15.220410   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0528 14:04:16.969907   14040 main.go:141] libmachine: [stdout =====>] : False
	
	I0528 14:04:16.969988   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:04:16.970072   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0528 14:04:18.474965   14040 main.go:141] libmachine: [stdout =====>] : True
	
	I0528 14:04:18.475155   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:04:18.475231   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0528 14:04:22.222808   14040 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0528 14:04:22.222808   14040 main.go:141] libmachine: [stderr =====>] : 
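
Switch discovery above shells out to PowerShell and parses ConvertTo-Json output to decide which virtual switch the new VM should attach to. A sketch of that pattern in Go (Windows-only; the struct fields match the JSON shape shown in the log):

    package main

    import (
        "encoding/json"
        "fmt"
        "log"
        "os/exec"
    )

    type vmSwitch struct {
        Id         string
        Name       string
        SwitchType int
    }

    func main() {
        ps := `[Console]::OutputEncoding = [Text.Encoding]::UTF8; ` +
            `ConvertTo-Json @(Hyper-V\Get-VMSwitch | Select Id, Name, SwitchType)`
        out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", ps).Output()
        if err != nil {
            log.Fatal(err)
        }
        var switches []vmSwitch
        if err := json.Unmarshal(out, &switches); err != nil {
            log.Fatal(err)
        }
        for _, s := range switches {
            fmt.Printf("%s (%s) type=%d\n", s.Name, s.Id, s.SwitchType)
        }
    }
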
	I0528 14:04:22.226295   14040 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1716398070-18934-amd64.iso...
	I0528 14:04:22.749831   14040 main.go:141] libmachine: Creating SSH key...
	I0528 14:04:23.011464   14040 main.go:141] libmachine: Creating VM...
	I0528 14:04:23.011464   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0528 14:04:26.008454   14040 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0528 14:04:26.009316   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:04:26.009435   14040 main.go:141] libmachine: Using switch "Default Switch"
	I0528 14:04:26.009484   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0528 14:04:27.870335   14040 main.go:141] libmachine: [stdout =====>] : True
	
	I0528 14:04:27.870335   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:04:27.870335   14040 main.go:141] libmachine: Creating VHD
	I0528 14:04:27.870671   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-941700-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0528 14:04:31.708369   14040 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-941700-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 76368530-AD82-4A71-9BC8-9335E626BFD1
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0528 14:04:31.708369   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:04:31.708752   14040 main.go:141] libmachine: Writing magic tar header
	I0528 14:04:31.708752   14040 main.go:141] libmachine: Writing SSH key tar header
	I0528 14:04:31.724581   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-941700-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-941700-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0528 14:04:34.988806   14040 main.go:141] libmachine: [stdout =====>] : 
	I0528 14:04:34.989689   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:04:34.989689   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-941700-m02\disk.vhd' -SizeBytes 20000MB
	I0528 14:04:37.590422   14040 main.go:141] libmachine: [stdout =====>] : 
	I0528 14:04:37.590422   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:04:37.591187   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-941700-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-941700-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0528 14:04:41.351498   14040 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-941700-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0528 14:04:41.352212   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:04:41.352212   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-941700-m02 -DynamicMemoryEnabled $false
	I0528 14:04:43.647841   14040 main.go:141] libmachine: [stdout =====>] : 
	I0528 14:04:43.647980   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:04:43.647980   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-941700-m02 -Count 2
	I0528 14:04:45.847083   14040 main.go:141] libmachine: [stdout =====>] : 
	I0528 14:04:45.847083   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:04:45.847744   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-941700-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-941700-m02\boot2docker.iso'
	I0528 14:04:48.537614   14040 main.go:141] libmachine: [stdout =====>] : 
	I0528 14:04:48.537614   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:04:48.537614   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-941700-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-941700-m02\disk.vhd'
	I0528 14:04:51.281115   14040 main.go:141] libmachine: [stdout =====>] : 
	I0528 14:04:51.281793   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:04:51.281793   14040 main.go:141] libmachine: Starting VM...
	I0528 14:04:51.281793   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-941700-m02
	I0528 14:04:54.377812   14040 main.go:141] libmachine: [stdout =====>] : 
	I0528 14:04:54.378319   14040 main.go:141] libmachine: [stderr =====>] : 
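
The sequence above builds the m02 machine entirely through Hyper-V cmdlets: prepare the VHD, create the VM, pin memory and CPU count, attach the boot2docker ISO and the disk, then start it. A condensed Go sketch that runs the same kind of cmdlet sequence through PowerShell (VHD preparation omitted; names and sizes taken from the log; this is not the libmachine driver itself):

    package main

    import (
        "fmt"
        "log"
        "os/exec"
    )

    // psRun executes one PowerShell command and aborts on the first failure.
    func psRun(command string) {
        out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", command).CombinedOutput()
        if err != nil {
            log.Fatalf("%s: %v\n%s", command, err, out)
        }
    }

    func main() {
        name := "ha-941700-m02"
        dir := `C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-941700-m02`
        for _, cmd := range []string{
            fmt.Sprintf(`Hyper-V\New-VM %s -Path '%s' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB`, name, dir),
            fmt.Sprintf(`Hyper-V\Set-VMMemory -VMName %s -DynamicMemoryEnabled $false`, name),
            fmt.Sprintf(`Hyper-V\Set-VMProcessor %s -Count 2`, name),
            fmt.Sprintf(`Hyper-V\Set-VMDvdDrive -VMName %s -Path '%s\boot2docker.iso'`, name, dir),
            fmt.Sprintf(`Hyper-V\Add-VMHardDiskDrive -VMName %s -Path '%s\disk.vhd'`, name, dir),
            fmt.Sprintf(`Hyper-V\Start-VM %s`, name),
        } {
            psRun(cmd)
        }
    }
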
	I0528 14:04:54.378319   14040 main.go:141] libmachine: Waiting for host to start...
	I0528 14:04:54.378361   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700-m02 ).state
	I0528 14:04:56.715445   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:04:56.715526   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:04:56.715585   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700-m02 ).networkadapters[0]).ipaddresses[0]
	I0528 14:04:59.302055   14040 main.go:141] libmachine: [stdout =====>] : 
	I0528 14:04:59.302055   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:05:00.308253   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700-m02 ).state
	I0528 14:05:02.601133   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:05:02.601664   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:05:02.601801   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700-m02 ).networkadapters[0]).ipaddresses[0]
	I0528 14:05:05.233148   14040 main.go:141] libmachine: [stdout =====>] : 
	I0528 14:05:05.233148   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:05:06.245664   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700-m02 ).state
	I0528 14:05:08.516786   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:05:08.516829   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:05:08.516900   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700-m02 ).networkadapters[0]).ipaddresses[0]
	I0528 14:05:11.125989   14040 main.go:141] libmachine: [stdout =====>] : 
	I0528 14:05:11.125989   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:05:12.142669   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700-m02 ).state
	I0528 14:05:14.409616   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:05:14.409616   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:05:14.409715   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700-m02 ).networkadapters[0]).ipaddresses[0]
	I0528 14:05:17.010159   14040 main.go:141] libmachine: [stdout =====>] : 
	I0528 14:05:17.011261   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:05:18.016512   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700-m02 ).state
	I0528 14:05:20.325142   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:05:20.326147   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:05:20.326214   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700-m02 ).networkadapters[0]).ipaddresses[0]
	I0528 14:05:22.921265   14040 main.go:141] libmachine: [stdout =====>] : 172.18.211.44
	
	I0528 14:05:22.921594   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:05:22.921682   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700-m02 ).state
	I0528 14:05:25.127077   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:05:25.127077   14040 main.go:141] libmachine: [stderr =====>] : 
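
"Waiting for host to start..." above is a poll: the driver repeatedly reads the VM state and the first IPv4 address of its first network adapter until Hyper-V reports one (the empty stdout lines are probes made before DHCP has assigned 172.18.211.44). A sketch of that loop using the same cmdlets shown in the log (illustrative only):

    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "strings"
        "time"
    )

    func ps(command string) string {
        out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", command).Output()
        if err != nil {
            log.Fatal(err)
        }
        return strings.TrimSpace(string(out))
    }

    func main() {
        const vm = "ha-941700-m02"
        for i := 0; i < 60; i++ {
            state := ps(fmt.Sprintf(`( Hyper-V\Get-VM %s ).state`, vm))
            ip := ps(fmt.Sprintf(`(( Hyper-V\Get-VM %s ).networkadapters[0]).ipaddresses[0]`, vm))
            if state == "Running" && ip != "" {
                fmt.Println("VM is up at", ip)
                return
            }
            time.Sleep(5 * time.Second) // each probe pair takes a few seconds, as in the log
        }
        log.Fatal("timed out waiting for " + vm)
    }
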
	I0528 14:05:25.127077   14040 machine.go:94] provisionDockerMachine start ...
	I0528 14:05:25.128197   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700-m02 ).state
	I0528 14:05:27.323532   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:05:27.323676   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:05:27.323752   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700-m02 ).networkadapters[0]).ipaddresses[0]
	I0528 14:05:29.906228   14040 main.go:141] libmachine: [stdout =====>] : 172.18.211.44
	
	I0528 14:05:29.906288   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:05:29.912560   14040 main.go:141] libmachine: Using SSH client type: native
	I0528 14:05:29.922621   14040 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5ca4a0] 0x5cd080 <nil>  [] 0s} 172.18.211.44 22 <nil> <nil>}
	I0528 14:05:29.922621   14040 main.go:141] libmachine: About to run SSH command:
	hostname
	I0528 14:05:30.041162   14040 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0528 14:05:30.041359   14040 buildroot.go:166] provisioning hostname "ha-941700-m02"
	I0528 14:05:30.041359   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700-m02 ).state
	I0528 14:05:32.230033   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:05:32.230163   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:05:32.230257   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700-m02 ).networkadapters[0]).ipaddresses[0]
	I0528 14:05:34.860480   14040 main.go:141] libmachine: [stdout =====>] : 172.18.211.44
	
	I0528 14:05:34.860480   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:05:34.865842   14040 main.go:141] libmachine: Using SSH client type: native
	I0528 14:05:34.866550   14040 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5ca4a0] 0x5cd080 <nil>  [] 0s} 172.18.211.44 22 <nil> <nil>}
	I0528 14:05:34.866550   14040 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-941700-m02 && echo "ha-941700-m02" | sudo tee /etc/hostname
	I0528 14:05:35.023817   14040 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-941700-m02
	
	I0528 14:05:35.023817   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700-m02 ).state
	I0528 14:05:37.273067   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:05:37.273669   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:05:37.273741   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700-m02 ).networkadapters[0]).ipaddresses[0]
	I0528 14:05:39.902424   14040 main.go:141] libmachine: [stdout =====>] : 172.18.211.44
	
	I0528 14:05:39.902424   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:05:39.908769   14040 main.go:141] libmachine: Using SSH client type: native
	I0528 14:05:39.908769   14040 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5ca4a0] 0x5cd080 <nil>  [] 0s} 172.18.211.44 22 <nil> <nil>}
	I0528 14:05:39.909294   14040 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-941700-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-941700-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-941700-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0528 14:05:40.049715   14040 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0528 14:05:40.049792   14040 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0528 14:05:40.049792   14040 buildroot.go:174] setting up certificates
	I0528 14:05:40.049792   14040 provision.go:84] configureAuth start
	I0528 14:05:40.049792   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700-m02 ).state
	I0528 14:05:42.218429   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:05:42.218429   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:05:42.218429   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700-m02 ).networkadapters[0]).ipaddresses[0]
	I0528 14:05:44.758292   14040 main.go:141] libmachine: [stdout =====>] : 172.18.211.44
	
	I0528 14:05:44.758292   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:05:44.758292   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700-m02 ).state
	I0528 14:05:46.933248   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:05:46.933869   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:05:46.933971   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700-m02 ).networkadapters[0]).ipaddresses[0]
	I0528 14:05:49.522186   14040 main.go:141] libmachine: [stdout =====>] : 172.18.211.44
	
	I0528 14:05:49.522186   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:05:49.522186   14040 provision.go:143] copyHostCerts
	I0528 14:05:49.522831   14040 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0528 14:05:49.522831   14040 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0528 14:05:49.522831   14040 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0528 14:05:49.523592   14040 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0528 14:05:49.524896   14040 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0528 14:05:49.525267   14040 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0528 14:05:49.525336   14040 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0528 14:05:49.525821   14040 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1675 bytes)
	I0528 14:05:49.526850   14040 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0528 14:05:49.527133   14040 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0528 14:05:49.527133   14040 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0528 14:05:49.527612   14040 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0528 14:05:49.528394   14040 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-941700-m02 san=[127.0.0.1 172.18.211.44 ha-941700-m02 localhost minikube]
	I0528 14:05:49.668030   14040 provision.go:177] copyRemoteCerts
	I0528 14:05:49.681674   14040 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0528 14:05:49.681674   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700-m02 ).state
	I0528 14:05:51.883949   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:05:51.884098   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:05:51.884098   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700-m02 ).networkadapters[0]).ipaddresses[0]
	I0528 14:05:54.504197   14040 main.go:141] libmachine: [stdout =====>] : 172.18.211.44
	
	I0528 14:05:54.504197   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:05:54.504197   14040 sshutil.go:53] new ssh client: &{IP:172.18.211.44 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-941700-m02\id_rsa Username:docker}
	I0528 14:05:54.610293   14040 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.9285518s)
	I0528 14:05:54.610357   14040 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0528 14:05:54.610509   14040 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0528 14:05:54.657473   14040 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0528 14:05:54.657473   14040 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0528 14:05:54.701662   14040 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0528 14:05:54.702735   14040 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0528 14:05:54.756112   14040 provision.go:87] duration metric: took 14.7062886s to configureAuth
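
Editor's note: the configureAuth step above (provision.go:117) issues a CA-signed server certificate whose SAN list covers the node's IPs and hostnames (127.0.0.1, 172.18.211.44, ha-941700-m02, localhost, minikube) before the cert and key are scp'd into /etc/docker. The following is a minimal Go sketch of that kind of issuance, not minikube's actual implementation; file paths, the organization string, and the validity period are illustrative, and error handling is elided for brevity.

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Load the CA cert and (assumed PKCS#1 RSA) CA key; paths are illustrative.
        caPEM, _ := os.ReadFile("ca.pem")
        caKeyPEM, _ := os.ReadFile("ca-key.pem")
        caBlock, _ := pem.Decode(caPEM)
        caCert, _ := x509.ParseCertificate(caBlock.Bytes)
        keyBlock, _ := pem.Decode(caKeyPEM)
        caKey, _ := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)

        // New server key and a template carrying the SANs seen in the log.
        serverKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-941700-m02"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(3, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"ha-941700-m02", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.18.211.44")},
        }
        der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)

        // Write server.pem; the matching server-key.pem would be written the same way.
        out, _ := os.Create("server.pem")
        defer out.Close()
        pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
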
	I0528 14:05:54.756112   14040 buildroot.go:189] setting minikube options for container-runtime
	I0528 14:05:54.756650   14040 config.go:182] Loaded profile config "ha-941700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0528 14:05:54.756732   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700-m02 ).state
	I0528 14:05:56.964041   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:05:56.964533   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:05:56.964533   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700-m02 ).networkadapters[0]).ipaddresses[0]
	I0528 14:05:59.553819   14040 main.go:141] libmachine: [stdout =====>] : 172.18.211.44
	
	I0528 14:05:59.553892   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:05:59.560033   14040 main.go:141] libmachine: Using SSH client type: native
	I0528 14:05:59.560584   14040 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5ca4a0] 0x5cd080 <nil>  [] 0s} 172.18.211.44 22 <nil> <nil>}
	I0528 14:05:59.560665   14040 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0528 14:05:59.684401   14040 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0528 14:05:59.684401   14040 buildroot.go:70] root file system type: tmpfs
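
Editor's note: probes like the `df --output=fstype / | tail -n 1` command above are executed on the guest over SSH (ssh_runner.go and libmachine's native SSH client). A minimal sketch of that pattern with golang.org/x/crypto/ssh is shown below, assuming key-based auth with the machine's id_rsa; host-key verification is skipped purely for brevity, which is only acceptable for a throwaway test VM.

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    // runSSH runs a single command on the guest and returns its combined output.
    func runSSH(addr, user, keyPath, cmd string) (string, error) {
        key, err := os.ReadFile(keyPath)
        if err != nil {
            return "", err
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return "", err
        }
        cfg := &ssh.ClientConfig{
            User:            user,
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // brevity only; do not do this in production
        }
        client, err := ssh.Dial("tcp", addr, cfg)
        if err != nil {
            return "", err
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            return "", err
        }
        defer sess.Close()
        out, err := sess.CombinedOutput(cmd)
        return string(out), err
    }

    func main() {
        out, err := runSSH("172.18.211.44:22", "docker",
            `C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-941700-m02\id_rsa`,
            "df --output=fstype / | tail -n 1")
        fmt.Println(out, err)
    }
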
	I0528 14:05:59.685200   14040 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0528 14:05:59.685200   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700-m02 ).state
	I0528 14:06:01.854310   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:06:01.855039   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:06:01.855039   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700-m02 ).networkadapters[0]).ipaddresses[0]
	I0528 14:06:04.442417   14040 main.go:141] libmachine: [stdout =====>] : 172.18.211.44
	
	I0528 14:06:04.442417   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:06:04.447434   14040 main.go:141] libmachine: Using SSH client type: native
	I0528 14:06:04.448035   14040 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5ca4a0] 0x5cd080 <nil>  [] 0s} 172.18.211.44 22 <nil> <nil>}
	I0528 14:06:04.448237   14040 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.18.216.124"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0528 14:06:04.599031   14040 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.18.216.124
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0528 14:06:04.599031   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700-m02 ).state
	I0528 14:06:06.782725   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:06:06.782809   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:06:06.782918   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700-m02 ).networkadapters[0]).ipaddresses[0]
	I0528 14:06:09.363720   14040 main.go:141] libmachine: [stdout =====>] : 172.18.211.44
	
	I0528 14:06:09.364370   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:06:09.370648   14040 main.go:141] libmachine: Using SSH client type: native
	I0528 14:06:09.371261   14040 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5ca4a0] 0x5cd080 <nil>  [] 0s} 172.18.211.44 22 <nil> <nil>}
	I0528 14:06:09.371337   14040 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0528 14:06:11.503788   14040 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0528 14:06:11.503788   14040 machine.go:97] duration metric: took 46.3755792s to provisionDockerMachine
	I0528 14:06:11.503788   14040 client.go:171] duration metric: took 1m58.2498015s to LocalClient.Create
	I0528 14:06:11.503788   14040 start.go:167] duration metric: took 1m58.2498015s to libmachine.API.Create "ha-941700"
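
Editor's note: the `sudo diff -u ... || { mv ...; systemctl daemon-reload/enable/restart; }` one-liner a few lines above only swaps in docker.service.new and restarts Docker when the rendered unit actually differs; here the unit did not exist yet (hence the `can't stat` message), so it was installed and the multi-user.target symlink was created. Below is a local-only Go sketch of the same compare-then-replace idea; the paths and systemctl invocation are illustrative, and on the guest these steps run under sudo.

    package main

    import (
        "bytes"
        "fmt"
        "log"
        "os"
        "os/exec"
    )

    // replaceIfChanged installs newPath over curPath and restarts the unit
    // only when the contents differ (or curPath does not exist yet).
    func replaceIfChanged(curPath, newPath, unit string) error {
        cur, err := os.ReadFile(curPath) // a missing file counts as "different"
        newer, rerr := os.ReadFile(newPath)
        if rerr != nil {
            return rerr
        }
        if err == nil && bytes.Equal(cur, newer) {
            return nil // nothing to do, skip the restart
        }
        if err := os.Rename(newPath, curPath); err != nil {
            return err
        }
        for _, args := range [][]string{
            {"daemon-reload"}, {"enable", unit}, {"restart", unit},
        } {
            if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
                return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
            }
        }
        return nil
    }

    func main() {
        if err := replaceIfChanged(
            "/lib/systemd/system/docker.service",
            "/lib/systemd/system/docker.service.new",
            "docker"); err != nil {
            log.Fatal(err)
        }
    }
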
	I0528 14:06:11.503788   14040 start.go:293] postStartSetup for "ha-941700-m02" (driver="hyperv")
	I0528 14:06:11.503788   14040 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0528 14:06:11.520021   14040 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0528 14:06:11.520021   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700-m02 ).state
	I0528 14:06:13.740931   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:06:13.740992   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:06:13.741049   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700-m02 ).networkadapters[0]).ipaddresses[0]
	I0528 14:06:16.339365   14040 main.go:141] libmachine: [stdout =====>] : 172.18.211.44
	
	I0528 14:06:16.340350   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:06:16.340620   14040 sshutil.go:53] new ssh client: &{IP:172.18.211.44 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-941700-m02\id_rsa Username:docker}
	I0528 14:06:16.452613   14040 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.9325816s)
	I0528 14:06:16.468378   14040 ssh_runner.go:195] Run: cat /etc/os-release
	I0528 14:06:16.478308   14040 info.go:137] Remote host: Buildroot 2023.02.9
	I0528 14:06:16.478308   14040 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0528 14:06:16.478482   14040 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0528 14:06:16.479863   14040 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\17802.pem -> 17802.pem in /etc/ssl/certs
	I0528 14:06:16.479863   14040 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\17802.pem -> /etc/ssl/certs/17802.pem
	I0528 14:06:16.492285   14040 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0528 14:06:16.513060   14040 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\17802.pem --> /etc/ssl/certs/17802.pem (1708 bytes)
	I0528 14:06:16.559235   14040 start.go:296] duration metric: took 5.0554363s for postStartSetup
	I0528 14:06:16.561613   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700-m02 ).state
	I0528 14:06:18.758934   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:06:18.758934   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:06:18.758934   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700-m02 ).networkadapters[0]).ipaddresses[0]
	I0528 14:06:21.390284   14040 main.go:141] libmachine: [stdout =====>] : 172.18.211.44
	
	I0528 14:06:21.390339   14040 main.go:141] libmachine: [stderr =====>] : 
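
Editor's note: every Hyper-V interaction in this log shells out to PowerShell, first `( Hyper-V\Get-VM <vm> ).state` and then the first adapter's first IP address. A rough Go sketch of that pattern via os/exec follows; the VM name and expressions are the ones printed in the [executing ==>] lines above.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // psQuery runs one PowerShell expression non-interactively and returns
    // its trimmed stdout, mirroring libmachine's [executing ==>] calls.
    func psQuery(expr string) (string, error) {
        cmd := exec.Command(`C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
            "-NoProfile", "-NonInteractive", expr)
        out, err := cmd.Output()
        return strings.TrimSpace(string(out)), err
    }

    func main() {
        state, err := psQuery(`( Hyper-V\Get-VM ha-941700-m02 ).state`)
        fmt.Println("state:", state, err)

        ip, err := psQuery(`(( Hyper-V\Get-VM ha-941700-m02 ).networkadapters[0]).ipaddresses[0]`)
        fmt.Println("ip:", ip, err)
    }
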
	I0528 14:06:21.390556   14040 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\config.json ...
	I0528 14:06:21.393425   14040 start.go:128] duration metric: took 2m8.1441479s to createHost
	I0528 14:06:21.393520   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700-m02 ).state
	I0528 14:06:23.584653   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:06:23.584653   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:06:23.584653   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700-m02 ).networkadapters[0]).ipaddresses[0]
	I0528 14:06:26.173809   14040 main.go:141] libmachine: [stdout =====>] : 172.18.211.44
	
	I0528 14:06:26.173809   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:06:26.180395   14040 main.go:141] libmachine: Using SSH client type: native
	I0528 14:06:26.181057   14040 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5ca4a0] 0x5cd080 <nil>  [] 0s} 172.18.211.44 22 <nil> <nil>}
	I0528 14:06:26.181057   14040 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0528 14:06:26.304140   14040 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716930386.299865259
	
	I0528 14:06:26.304194   14040 fix.go:216] guest clock: 1716930386.299865259
	I0528 14:06:26.304194   14040 fix.go:229] Guest: 2024-05-28 14:06:26.299865259 -0700 PDT Remote: 2024-05-28 14:06:21.3934259 -0700 PDT m=+340.270103101 (delta=4.906439359s)
	I0528 14:06:26.304311   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700-m02 ).state
	I0528 14:06:28.539975   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:06:28.539975   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:06:28.539975   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700-m02 ).networkadapters[0]).ipaddresses[0]
	I0528 14:06:31.205741   14040 main.go:141] libmachine: [stdout =====>] : 172.18.211.44
	
	I0528 14:06:31.205993   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:06:31.212125   14040 main.go:141] libmachine: Using SSH client type: native
	I0528 14:06:31.212125   14040 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5ca4a0] 0x5cd080 <nil>  [] 0s} 172.18.211.44 22 <nil> <nil>}
	I0528 14:06:31.212125   14040 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1716930386
	I0528 14:06:31.356411   14040 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue May 28 21:06:26 UTC 2024
	
	I0528 14:06:31.356411   14040 fix.go:236] clock set: Tue May 28 21:06:26 UTC 2024
	 (err=<nil>)
	I0528 14:06:31.356411   14040 start.go:83] releasing machines lock for "ha-941700-m02", held for 2m18.1072283s
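
Editor's note: the guest clock above had drifted about 4.9s from the host, so minikube reads `date +%s.%N` on the guest, compares it to the host clock, and resets the guest with `sudo date -s @<unix-seconds>`. A small Go sketch of the delta computation and the command it would issue; the 1s threshold is illustrative, not minikube's exact policy.

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    func main() {
        // Output of `date +%s.%N` on the guest, as seen in the log.
        guestRaw := "1716930386.299865259"

        secs, _ := strconv.ParseFloat(strings.TrimSpace(guestRaw), 64)
        guest := time.Unix(int64(secs), int64((secs-float64(int64(secs)))*1e9))
        host := time.Now()

        delta := guest.Sub(host)
        if delta < 0 {
            delta = -delta
        }
        fmt.Printf("guest=%s host=%s delta=%s\n", guest, host, delta)

        // Only correct the clock when the drift is large enough to matter.
        if delta > time.Second {
            fmt.Printf("would run on the guest: sudo date -s @%d\n", host.Unix())
        }
    }
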
	I0528 14:06:31.356804   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700-m02 ).state
	I0528 14:06:33.572048   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:06:33.572334   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:06:33.572423   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700-m02 ).networkadapters[0]).ipaddresses[0]
	I0528 14:06:36.215557   14040 main.go:141] libmachine: [stdout =====>] : 172.18.211.44
	
	I0528 14:06:36.215557   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:06:36.219711   14040 out.go:177] * Found network options:
	I0528 14:06:36.223881   14040 out.go:177]   - NO_PROXY=172.18.216.124
	W0528 14:06:36.237996   14040 proxy.go:119] fail to check proxy env: Error ip not in block
	I0528 14:06:36.240699   14040 out.go:177]   - NO_PROXY=172.18.216.124
	W0528 14:06:36.242973   14040 proxy.go:119] fail to check proxy env: Error ip not in block
	W0528 14:06:36.244385   14040 proxy.go:119] fail to check proxy env: Error ip not in block
	I0528 14:06:36.247281   14040 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0528 14:06:36.247497   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700-m02 ).state
	I0528 14:06:36.259768   14040 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0528 14:06:36.259768   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700-m02 ).state
	I0528 14:06:38.504051   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:06:38.504051   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:06:38.504169   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700-m02 ).networkadapters[0]).ipaddresses[0]
	I0528 14:06:38.508127   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:06:38.508302   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:06:38.508302   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700-m02 ).networkadapters[0]).ipaddresses[0]
	I0528 14:06:41.278835   14040 main.go:141] libmachine: [stdout =====>] : 172.18.211.44
	
	I0528 14:06:41.278835   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:06:41.279417   14040 sshutil.go:53] new ssh client: &{IP:172.18.211.44 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-941700-m02\id_rsa Username:docker}
	I0528 14:06:41.310521   14040 main.go:141] libmachine: [stdout =====>] : 172.18.211.44
	
	I0528 14:06:41.310521   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:06:41.311551   14040 sshutil.go:53] new ssh client: &{IP:172.18.211.44 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-941700-m02\id_rsa Username:docker}
	I0528 14:06:41.434021   14040 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.1742421s)
	I0528 14:06:41.434807   14040 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.1873551s)
	W0528 14:06:41.434807   14040 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0528 14:06:41.448176   14040 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0528 14:06:41.479124   14040 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0528 14:06:41.479270   14040 start.go:494] detecting cgroup driver to use...
	I0528 14:06:41.479491   14040 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0528 14:06:41.525716   14040 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0528 14:06:41.558563   14040 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0528 14:06:41.577565   14040 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0528 14:06:41.589564   14040 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0528 14:06:41.621578   14040 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0528 14:06:41.653346   14040 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0528 14:06:41.683776   14040 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0528 14:06:41.717693   14040 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0528 14:06:41.751482   14040 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0528 14:06:41.784949   14040 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0528 14:06:41.816902   14040 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0528 14:06:41.850907   14040 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0528 14:06:41.879899   14040 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0528 14:06:41.909733   14040 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 14:06:42.102165   14040 ssh_runner.go:195] Run: sudo systemctl restart containerd
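
Editor's note: the run of `sed -i` commands above rewrites /etc/containerd/config.toml on the guest: sandbox (pause) image, cgroup driver (`SystemdCgroup = false`), runc runtime type, CNI conf_dir, and unprivileged ports. Below is a local Go sketch of one of those edits using a regexp, purely to illustrate the pattern the sed expressions implement; it assumes direct (root) access to the file.

    package main

    import (
        "log"
        "os"
        "regexp"
    )

    func main() {
        const path = "/etc/containerd/config.toml" // requires root, like the sed commands

        data, err := os.ReadFile(path)
        if err != nil {
            log.Fatal(err)
        }

        // Equivalent of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
        re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
        out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))

        if err := os.WriteFile(path, out, 0o644); err != nil {
            log.Fatal(err)
        }
    }
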
	I0528 14:06:42.134735   14040 start.go:494] detecting cgroup driver to use...
	I0528 14:06:42.149369   14040 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0528 14:06:42.185372   14040 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0528 14:06:42.222313   14040 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0528 14:06:42.271534   14040 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0528 14:06:42.308159   14040 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0528 14:06:42.343073   14040 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0528 14:06:42.407970   14040 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0528 14:06:42.434720   14040 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0528 14:06:42.484664   14040 ssh_runner.go:195] Run: which cri-dockerd
	I0528 14:06:42.506117   14040 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0528 14:06:42.523571   14040 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0528 14:06:42.572814   14040 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0528 14:06:42.769548   14040 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0528 14:06:42.958438   14040 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0528 14:06:42.958599   14040 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0528 14:06:43.004304   14040 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 14:06:43.213900   14040 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0528 14:06:45.756581   14040 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5426761s)
	I0528 14:06:45.768354   14040 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0528 14:06:45.806522   14040 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0528 14:06:45.847219   14040 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0528 14:06:46.064060   14040 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0528 14:06:46.275801   14040 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 14:06:46.477420   14040 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0528 14:06:46.525469   14040 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0528 14:06:46.565085   14040 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 14:06:46.769708   14040 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0528 14:06:46.883085   14040 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0528 14:06:46.898387   14040 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0528 14:06:46.907314   14040 start.go:562] Will wait 60s for crictl version
	I0528 14:06:46.922316   14040 ssh_runner.go:195] Run: which crictl
	I0528 14:06:46.941492   14040 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0528 14:06:46.994967   14040 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0528 14:06:47.003865   14040 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0528 14:06:47.048150   14040 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0528 14:06:47.082114   14040 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.0.2 ...
	I0528 14:06:47.087838   14040 out.go:177]   - env NO_PROXY=172.18.216.124
	I0528 14:06:47.089541   14040 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0528 14:06:47.094135   14040 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0528 14:06:47.094135   14040 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0528 14:06:47.094135   14040 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0528 14:06:47.094135   14040 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:91:32:cc Flags:up|broadcast|multicast|running}
	I0528 14:06:47.097599   14040 ip.go:210] interface addr: fe80::6a65:d82d:f435:d71f/64
	I0528 14:06:47.097599   14040 ip.go:210] interface addr: 172.18.208.1/20
	I0528 14:06:47.112315   14040 ssh_runner.go:195] Run: grep 172.18.208.1	host.minikube.internal$ /etc/hosts
	I0528 14:06:47.119203   14040 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.18.208.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
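
Editor's note: the ip.go lines above walk the host's network interfaces looking for the one whose name starts with "vEthernet (Default Switch)" and take its IPv4 address (172.18.208.1/20), which is then written into the guest's /etc/hosts as host.minikube.internal. A sketch of that lookup with the standard net package; the prefix string is taken from the log.

    package main

    import (
        "fmt"
        "net"
        "strings"
    )

    // ipForInterfacePrefix returns the first IPv4 address of the first
    // interface whose name starts with the given prefix.
    func ipForInterfacePrefix(prefix string) (net.IP, error) {
        ifaces, err := net.Interfaces()
        if err != nil {
            return nil, err
        }
        for _, iface := range ifaces {
            if !strings.HasPrefix(iface.Name, prefix) {
                continue
            }
            addrs, err := iface.Addrs()
            if err != nil {
                return nil, err
            }
            for _, a := range addrs {
                if ipnet, ok := a.(*net.IPNet); ok && ipnet.IP.To4() != nil {
                    return ipnet.IP.To4(), nil
                }
            }
        }
        return nil, fmt.Errorf("no interface matches prefix %q", prefix)
    }

    func main() {
        ip, err := ipForInterfacePrefix("vEthernet (Default Switch)")
        fmt.Println(ip, err)
    }
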
	I0528 14:06:47.140686   14040 mustload.go:65] Loading cluster: ha-941700
	I0528 14:06:47.141317   14040 config.go:182] Loaded profile config "ha-941700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0528 14:06:47.142166   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700 ).state
	I0528 14:06:49.341576   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:06:49.341911   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:06:49.341911   14040 host.go:66] Checking if "ha-941700" exists ...
	I0528 14:06:49.342623   14040 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700 for IP: 172.18.211.44
	I0528 14:06:49.342623   14040 certs.go:194] generating shared ca certs ...
	I0528 14:06:49.342716   14040 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 14:06:49.342998   14040 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0528 14:06:49.343749   14040 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0528 14:06:49.343749   14040 certs.go:256] generating profile certs ...
	I0528 14:06:49.344883   14040 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\client.key
	I0528 14:06:49.344987   14040 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\apiserver.key.3416c130
	I0528 14:06:49.345230   14040 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\apiserver.crt.3416c130 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.18.216.124 172.18.211.44 172.18.223.254]
	I0528 14:06:49.496634   14040 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\apiserver.crt.3416c130 ...
	I0528 14:06:49.496634   14040 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\apiserver.crt.3416c130: {Name:mk3007809a45b4e22adf6f2b66085f72973b6137 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 14:06:49.497964   14040 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\apiserver.key.3416c130 ...
	I0528 14:06:49.497964   14040 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\apiserver.key.3416c130: {Name:mke71e2405d24da228b21d1fd240c74c98071bc5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 14:06:49.498924   14040 certs.go:381] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\apiserver.crt.3416c130 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\apiserver.crt
	I0528 14:06:49.513583   14040 certs.go:385] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\apiserver.key.3416c130 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\apiserver.key
	I0528 14:06:49.514382   14040 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\proxy-client.key
	I0528 14:06:49.514382   14040 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0528 14:06:49.515013   14040 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0528 14:06:49.515207   14040 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0528 14:06:49.515356   14040 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0528 14:06:49.515356   14040 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0528 14:06:49.515356   14040 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0528 14:06:49.515356   14040 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0528 14:06:49.515356   14040 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0528 14:06:49.516355   14040 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\1780.pem (1338 bytes)
	W0528 14:06:49.516355   14040 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\1780_empty.pem, impossibly tiny 0 bytes
	I0528 14:06:49.516355   14040 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0528 14:06:49.516355   14040 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0528 14:06:49.516355   14040 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0528 14:06:49.517356   14040 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0528 14:06:49.517356   14040 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\17802.pem (1708 bytes)
	I0528 14:06:49.517356   14040 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\17802.pem -> /usr/share/ca-certificates/17802.pem
	I0528 14:06:49.517356   14040 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0528 14:06:49.518367   14040 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\1780.pem -> /usr/share/ca-certificates/1780.pem
	I0528 14:06:49.518367   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700 ).state
	I0528 14:06:51.783227   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:06:51.783227   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:06:51.783227   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700 ).networkadapters[0]).ipaddresses[0]
	I0528 14:06:54.446486   14040 main.go:141] libmachine: [stdout =====>] : 172.18.216.124
	
	I0528 14:06:54.446714   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:06:54.446893   14040 sshutil.go:53] new ssh client: &{IP:172.18.216.124 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-941700\id_rsa Username:docker}
	I0528 14:06:54.542193   14040 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0528 14:06:54.551643   14040 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0528 14:06:54.592234   14040 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0528 14:06:54.600755   14040 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0528 14:06:54.633894   14040 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0528 14:06:54.641414   14040 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0528 14:06:54.672954   14040 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0528 14:06:54.680833   14040 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0528 14:06:54.714494   14040 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0528 14:06:54.722067   14040 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0528 14:06:54.762616   14040 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0528 14:06:54.770420   14040 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0528 14:06:54.796185   14040 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0528 14:06:54.848335   14040 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0528 14:06:54.904155   14040 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0528 14:06:54.957932   14040 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0528 14:06:55.003837   14040 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0528 14:06:55.049357   14040 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0528 14:06:55.100311   14040 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0528 14:06:55.153656   14040 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0528 14:06:55.201155   14040 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\17802.pem --> /usr/share/ca-certificates/17802.pem (1708 bytes)
	I0528 14:06:55.249320   14040 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0528 14:06:55.297280   14040 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\1780.pem --> /usr/share/ca-certificates/1780.pem (1338 bytes)
	I0528 14:06:55.349829   14040 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0528 14:06:55.384899   14040 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0528 14:06:55.416859   14040 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0528 14:06:55.450522   14040 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0528 14:06:55.484373   14040 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0528 14:06:55.518616   14040 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0528 14:06:55.554599   14040 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0528 14:06:55.599480   14040 ssh_runner.go:195] Run: openssl version
	I0528 14:06:55.621213   14040 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1780.pem && ln -fs /usr/share/ca-certificates/1780.pem /etc/ssl/certs/1780.pem"
	I0528 14:06:55.654405   14040 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1780.pem
	I0528 14:06:55.661632   14040 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 28 20:41 /usr/share/ca-certificates/1780.pem
	I0528 14:06:55.673708   14040 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1780.pem
	I0528 14:06:55.694837   14040 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1780.pem /etc/ssl/certs/51391683.0"
	I0528 14:06:55.726609   14040 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17802.pem && ln -fs /usr/share/ca-certificates/17802.pem /etc/ssl/certs/17802.pem"
	I0528 14:06:55.757127   14040 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17802.pem
	I0528 14:06:55.764804   14040 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 28 20:41 /usr/share/ca-certificates/17802.pem
	I0528 14:06:55.778174   14040 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17802.pem
	I0528 14:06:55.798574   14040 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/17802.pem /etc/ssl/certs/3ec20f2e.0"
	I0528 14:06:55.832906   14040 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0528 14:06:55.866041   14040 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0528 14:06:55.872458   14040 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 28 20:25 /usr/share/ca-certificates/minikubeCA.pem
	I0528 14:06:55.885563   14040 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0528 14:06:55.906352   14040 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
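
Editor's note: the openssl/ln steps above make each CA usable by system TLS clients inside the guest: `openssl x509 -hash -noout` prints the subject hash (e.g. b5213941 for minikubeCA.pem), and a `<hash>.0` symlink is created in /etc/ssl/certs. A sketch of the same two steps driven from Go; it shells out to openssl rather than re-implementing the legacy subject-hash algorithm, and the paths are the ones seen in the log.

    package main

    import (
        "fmt"
        "log"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // linkCertByHash symlinks certPath into certsDir under OpenSSL's
    // <subject-hash>.0 name so TLS clients can find it.
    func linkCertByHash(certPath, certsDir string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return fmt.Errorf("openssl hash: %w", err)
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join(certsDir, hash+".0")
        // Replace any stale link, mirroring `ln -fs`.
        _ = os.Remove(link)
        return os.Symlink(certPath, link)
    }

    func main() {
        if err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
            log.Fatal(err)
        }
    }
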
	I0528 14:06:55.942093   14040 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0528 14:06:55.950745   14040 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0528 14:06:55.951053   14040 kubeadm.go:928] updating node {m02 172.18.211.44 8443 v1.30.1 docker true true} ...
	I0528 14:06:55.951241   14040 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-941700-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.18.211.44
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-941700 Namespace:default APIServerHAVIP:172.18.223.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0528 14:06:55.951323   14040 kube-vip.go:115] generating kube-vip config ...
	I0528 14:06:55.963722   14040 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0528 14:06:55.991401   14040 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0528 14:06:55.991616   14040 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.18.223.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
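
Editor's note: the kube-vip static pod manifest above (kube-vip.go:137) is generated from a template with the control-plane VIP (172.18.223.254), API server port, and leader-election settings filled in, and control-plane load balancing is auto-enabled via lb_enable/lb_port. The following is a much-reduced text/template sketch of that generation step; only a few fields are shown and the template text is illustrative, not minikube's own.

    package main

    import (
        "os"
        "text/template"
    )

    // A trimmed-down stand-in for the full manifest printed in the log.
    const kubeVipTmpl = `apiVersion: v1
    kind: Pod
    metadata:
      name: kube-vip
      namespace: kube-system
    spec:
      containers:
      - name: kube-vip
        image: ghcr.io/kube-vip/kube-vip:v0.8.0
        args: ["manager"]
        env:
        - name: port
          value: "{{ .Port }}"
        - name: address
          value: {{ .VIP }}
        - name: lb_enable
          value: "{{ .EnableLB }}"
      hostNetwork: true
    `

    type kubeVipParams struct {
        VIP      string
        Port     int
        EnableLB bool
    }

    func main() {
        t := template.Must(template.New("kube-vip").Parse(kubeVipTmpl))
        // Values taken from the log above.
        _ = t.Execute(os.Stdout, kubeVipParams{VIP: "172.18.223.254", Port: 8443, EnableLB: true})
    }
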
	I0528 14:06:56.005091   14040 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0528 14:06:56.021925   14040 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.1': No such file or directory
	
	Initiating transfer...
	I0528 14:06:56.037314   14040 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.1
	I0528 14:06:56.061834   14040 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet.sha256 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubelet
	I0528 14:06:56.061834   14040 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl.sha256 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubectl
	I0528 14:06:56.061834   14040 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm.sha256 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubeadm
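
Editor's note: download.go fetches each Kubernetes binary together with its published .sha256 file (the `?checksum=file:` suffix in the URLs above) and verifies the digest before the binary is cached and scp'd into the guest. A compact sketch of that verify-after-download idea with net/http and crypto/sha256; error handling is minimal and the URL is the kubeadm one from the log.

    package main

    import (
        "crypto/sha256"
        "encoding/hex"
        "fmt"
        "io"
        "log"
        "net/http"
        "os"
        "strings"
    )

    func fetch(url string) ([]byte, error) {
        resp, err := http.Get(url)
        if err != nil {
            return nil, err
        }
        defer resp.Body.Close()
        if resp.StatusCode != http.StatusOK {
            return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
        }
        return io.ReadAll(resp.Body)
    }

    func main() {
        const base = "https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm"

        bin, err := fetch(base)
        if err != nil {
            log.Fatal(err)
        }
        sum, err := fetch(base + ".sha256")
        if err != nil {
            log.Fatal(err)
        }

        // The .sha256 file contains the hex digest (possibly followed by a filename).
        want := strings.Fields(strings.TrimSpace(string(sum)))[0]
        h := sha256.Sum256(bin)
        got := hex.EncodeToString(h[:])
        if got != want {
            log.Fatalf("checksum mismatch: got %s want %s", got, want)
        }
        if err := os.WriteFile("kubeadm", bin, 0o755); err != nil {
            log.Fatal(err)
        }
        fmt.Println("verified", len(bin), "bytes")
    }
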
	I0528 14:06:57.190502   14040 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubeadm -> /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0528 14:06:57.202477   14040 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0528 14:06:57.210453   14040 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubeadm': No such file or directory
	I0528 14:06:57.210770   14040 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubeadm --> /var/lib/minikube/binaries/v1.30.1/kubeadm (50249880 bytes)
	I0528 14:06:57.238048   14040 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubectl -> /var/lib/minikube/binaries/v1.30.1/kubectl
	I0528 14:06:57.249957   14040 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubectl
	I0528 14:06:57.322511   14040 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubectl': No such file or directory
	I0528 14:06:57.322789   14040 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubectl --> /var/lib/minikube/binaries/v1.30.1/kubectl (51454104 bytes)
	I0528 14:06:57.604099   14040 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0528 14:06:57.689301   14040 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubelet -> /var/lib/minikube/binaries/v1.30.1/kubelet
	I0528 14:06:57.702202   14040 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubelet
	I0528 14:06:57.724091   14040 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubelet': No such file or directory
	I0528 14:06:57.724743   14040 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubelet --> /var/lib/minikube/binaries/v1.30.1/kubelet (100100024 bytes)
	I0528 14:06:58.711363   14040 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0528 14:06:58.730386   14040 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0528 14:06:58.763792   14040 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0528 14:06:58.795712   14040 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0528 14:06:58.840258   14040 ssh_runner.go:195] Run: grep 172.18.223.254	control-plane.minikube.internal$ /etc/hosts
	I0528 14:06:58.847746   14040 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.18.223.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0528 14:06:58.887458   14040 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 14:06:59.087194   14040 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0528 14:06:59.120443   14040 host.go:66] Checking if "ha-941700" exists ...
	I0528 14:06:59.121859   14040 start.go:316] joinCluster: &{Name:ha-941700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 Clust
erName:ha-941700 Namespace:default APIServerHAVIP:172.18.223.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.18.216.124 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.18.211.44 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertEx
piration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0528 14:06:59.121859   14040 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0528 14:06:59.121859   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700 ).state
	I0528 14:07:01.291653   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:07:01.292114   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:07:01.292114   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700 ).networkadapters[0]).ipaddresses[0]
	I0528 14:07:03.967064   14040 main.go:141] libmachine: [stdout =====>] : 172.18.216.124
	
	I0528 14:07:03.967740   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:07:03.967740   14040 sshutil.go:53] new ssh client: &{IP:172.18.216.124 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-941700\id_rsa Username:docker}
	I0528 14:07:04.171439   14040 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm token create --print-join-command --ttl=0": (5.0495692s)
	I0528 14:07:04.171539   14040 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:172.18.211.44 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0528 14:07:04.171689   14040 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token pjbq9b.3kg1wcqsonrr0ls0 --discovery-token-ca-cert-hash sha256:539ff17077d9a18fe6ace348cbfbe22affc6c6390f84ec7c9294c19b6ec125ea --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-941700-m02 --control-plane --apiserver-advertise-address=172.18.211.44 --apiserver-bind-port=8443"
	I0528 14:07:49.251617   14040 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token pjbq9b.3kg1wcqsonrr0ls0 --discovery-token-ca-cert-hash sha256:539ff17077d9a18fe6ace348cbfbe22affc6c6390f84ec7c9294c19b6ec125ea --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-941700-m02 --control-plane --apiserver-advertise-address=172.18.211.44 --apiserver-bind-port=8443": (45.0796647s)
	I0528 14:07:49.251699   14040 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0528 14:07:50.167917   14040 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-941700-m02 minikube.k8s.io/updated_at=2024_05_28T14_07_50_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=c95b4fdda455689199e2a93674568b261e34dc82 minikube.k8s.io/name=ha-941700 minikube.k8s.io/primary=false
	I0528 14:07:50.358582   14040 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-941700-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0528 14:07:50.540943   14040 start.go:318] duration metric: took 51.4189125s to joinCluster
	I0528 14:07:50.541104   14040 start.go:234] Will wait 6m0s for node &{Name:m02 IP:172.18.211.44 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0528 14:07:50.544511   14040 out.go:177] * Verifying Kubernetes components...
	I0528 14:07:50.541966   14040 config.go:182] Loaded profile config "ha-941700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0528 14:07:50.559366   14040 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 14:07:50.986707   14040 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0528 14:07:51.024614   14040 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0528 14:07:51.024614   14040 kapi.go:59] client config for ha-941700: &rest.Config{Host:"https://172.18.223.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-941700\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-941700\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1a6b5a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0528 14:07:51.024614   14040 kubeadm.go:477] Overriding stale ClientConfig host https://172.18.223.254:8443 with https://172.18.216.124:8443
	I0528 14:07:51.024614   14040 node_ready.go:35] waiting up to 6m0s for node "ha-941700-m02" to be "Ready" ...
	I0528 14:07:51.024614   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700-m02
	I0528 14:07:51.024614   14040 round_trippers.go:469] Request Headers:
	I0528 14:07:51.024614   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:07:51.024614   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:07:51.050927   14040 round_trippers.go:574] Response Status: 200 OK in 26 milliseconds
	I0528 14:07:51.540393   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700-m02
	I0528 14:07:51.540393   14040 round_trippers.go:469] Request Headers:
	I0528 14:07:51.540393   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:07:51.540393   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:07:51.547288   14040 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0528 14:07:52.033001   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700-m02
	I0528 14:07:52.033116   14040 round_trippers.go:469] Request Headers:
	I0528 14:07:52.033116   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:07:52.033116   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:07:52.038523   14040 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0528 14:07:52.538048   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700-m02
	I0528 14:07:52.538106   14040 round_trippers.go:469] Request Headers:
	I0528 14:07:52.538106   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:07:52.538106   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:07:52.543866   14040 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0528 14:07:53.029240   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700-m02
	I0528 14:07:53.029240   14040 round_trippers.go:469] Request Headers:
	I0528 14:07:53.029240   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:07:53.029240   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:07:53.369081   14040 round_trippers.go:574] Response Status: 200 OK in 339 milliseconds
	I0528 14:07:53.370439   14040 node_ready.go:53] node "ha-941700-m02" has status "Ready":"False"
	I0528 14:07:53.536687   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700-m02
	I0528 14:07:53.536687   14040 round_trippers.go:469] Request Headers:
	I0528 14:07:53.536687   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:07:53.536687   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:07:53.542317   14040 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0528 14:07:54.028702   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700-m02
	I0528 14:07:54.028934   14040 round_trippers.go:469] Request Headers:
	I0528 14:07:54.028934   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:07:54.028934   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:07:54.034290   14040 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0528 14:07:54.529251   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700-m02
	I0528 14:07:54.529363   14040 round_trippers.go:469] Request Headers:
	I0528 14:07:54.529363   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:07:54.529363   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:07:54.534072   14040 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 14:07:55.035419   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700-m02
	I0528 14:07:55.035419   14040 round_trippers.go:469] Request Headers:
	I0528 14:07:55.035419   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:07:55.035419   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:07:55.057174   14040 round_trippers.go:574] Response Status: 200 OK in 21 milliseconds
	I0528 14:07:55.530153   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700-m02
	I0528 14:07:55.530153   14040 round_trippers.go:469] Request Headers:
	I0528 14:07:55.530153   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:07:55.530153   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:07:55.534837   14040 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 14:07:55.535805   14040 node_ready.go:53] node "ha-941700-m02" has status "Ready":"False"
	I0528 14:07:56.040731   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700-m02
	I0528 14:07:56.040731   14040 round_trippers.go:469] Request Headers:
	I0528 14:07:56.040731   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:07:56.040731   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:07:56.045600   14040 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 14:07:56.535680   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700-m02
	I0528 14:07:56.535680   14040 round_trippers.go:469] Request Headers:
	I0528 14:07:56.535680   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:07:56.535680   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:07:56.541251   14040 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0528 14:07:57.025259   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700-m02
	I0528 14:07:57.025473   14040 round_trippers.go:469] Request Headers:
	I0528 14:07:57.025473   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:07:57.025473   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:07:57.061115   14040 round_trippers.go:574] Response Status: 200 OK in 35 milliseconds
	I0528 14:07:57.539429   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700-m02
	I0528 14:07:57.539429   14040 round_trippers.go:469] Request Headers:
	I0528 14:07:57.539429   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:07:57.539429   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:07:57.545022   14040 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0528 14:07:57.547393   14040 node_ready.go:49] node "ha-941700-m02" has status "Ready":"True"
	I0528 14:07:57.547393   14040 node_ready.go:38] duration metric: took 6.5227651s for node "ha-941700-m02" to be "Ready" ...
	I0528 14:07:57.547393   14040 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0528 14:07:57.547393   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods
	I0528 14:07:57.547393   14040 round_trippers.go:469] Request Headers:
	I0528 14:07:57.547393   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:07:57.547393   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:07:57.557212   14040 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0528 14:07:57.568427   14040 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-8prdw" in "kube-system" namespace to be "Ready" ...
	I0528 14:07:57.569341   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8prdw
	I0528 14:07:57.569341   14040 round_trippers.go:469] Request Headers:
	I0528 14:07:57.569341   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:07:57.569341   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:07:57.583201   14040 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0528 14:07:57.584050   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700
	I0528 14:07:57.584050   14040 round_trippers.go:469] Request Headers:
	I0528 14:07:57.584050   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:07:57.584050   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:07:57.593043   14040 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0528 14:07:57.595245   14040 pod_ready.go:92] pod "coredns-7db6d8ff4d-8prdw" in "kube-system" namespace has status "Ready":"True"
	I0528 14:07:57.595245   14040 pod_ready.go:81] duration metric: took 26.8173ms for pod "coredns-7db6d8ff4d-8prdw" in "kube-system" namespace to be "Ready" ...
	I0528 14:07:57.595245   14040 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-kmzqb" in "kube-system" namespace to be "Ready" ...
	I0528 14:07:57.595245   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kmzqb
	I0528 14:07:57.595245   14040 round_trippers.go:469] Request Headers:
	I0528 14:07:57.595245   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:07:57.595245   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:07:57.599012   14040 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 14:07:57.600181   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700
	I0528 14:07:57.600181   14040 round_trippers.go:469] Request Headers:
	I0528 14:07:57.600181   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:07:57.600181   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:07:57.604012   14040 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 14:07:57.605096   14040 pod_ready.go:92] pod "coredns-7db6d8ff4d-kmzqb" in "kube-system" namespace has status "Ready":"True"
	I0528 14:07:57.605096   14040 pod_ready.go:81] duration metric: took 9.8507ms for pod "coredns-7db6d8ff4d-kmzqb" in "kube-system" namespace to be "Ready" ...
	I0528 14:07:57.605096   14040 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-941700" in "kube-system" namespace to be "Ready" ...
	I0528 14:07:57.605096   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods/etcd-ha-941700
	I0528 14:07:57.605096   14040 round_trippers.go:469] Request Headers:
	I0528 14:07:57.605096   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:07:57.605096   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:07:57.610062   14040 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 14:07:57.610915   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700
	I0528 14:07:57.610915   14040 round_trippers.go:469] Request Headers:
	I0528 14:07:57.610915   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:07:57.610915   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:07:57.616515   14040 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0528 14:07:57.617842   14040 pod_ready.go:92] pod "etcd-ha-941700" in "kube-system" namespace has status "Ready":"True"
	I0528 14:07:57.617842   14040 pod_ready.go:81] duration metric: took 12.7464ms for pod "etcd-ha-941700" in "kube-system" namespace to be "Ready" ...
	I0528 14:07:57.617842   14040 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-941700-m02" in "kube-system" namespace to be "Ready" ...
	I0528 14:07:57.618531   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods/etcd-ha-941700-m02
	I0528 14:07:57.618531   14040 round_trippers.go:469] Request Headers:
	I0528 14:07:57.618531   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:07:57.618531   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:07:57.623125   14040 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 14:07:57.624584   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700-m02
	I0528 14:07:57.624584   14040 round_trippers.go:469] Request Headers:
	I0528 14:07:57.624670   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:07:57.624670   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:07:57.627938   14040 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 14:07:58.132295   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods/etcd-ha-941700-m02
	I0528 14:07:58.132295   14040 round_trippers.go:469] Request Headers:
	I0528 14:07:58.132295   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:07:58.132295   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:07:58.138451   14040 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0528 14:07:58.142744   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700-m02
	I0528 14:07:58.142875   14040 round_trippers.go:469] Request Headers:
	I0528 14:07:58.142875   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:07:58.142875   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:07:58.147745   14040 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 14:07:58.618589   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods/etcd-ha-941700-m02
	I0528 14:07:58.618830   14040 round_trippers.go:469] Request Headers:
	I0528 14:07:58.618830   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:07:58.618830   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:07:58.624556   14040 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0528 14:07:58.624978   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700-m02
	I0528 14:07:58.624978   14040 round_trippers.go:469] Request Headers:
	I0528 14:07:58.624978   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:07:58.624978   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:07:58.632122   14040 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0528 14:07:59.121182   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods/etcd-ha-941700-m02
	I0528 14:07:59.121182   14040 round_trippers.go:469] Request Headers:
	I0528 14:07:59.121182   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:07:59.121182   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:07:59.126056   14040 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 14:07:59.127637   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700-m02
	I0528 14:07:59.127637   14040 round_trippers.go:469] Request Headers:
	I0528 14:07:59.127637   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:07:59.127637   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:07:59.131756   14040 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 14:07:59.630528   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods/etcd-ha-941700-m02
	I0528 14:07:59.630528   14040 round_trippers.go:469] Request Headers:
	I0528 14:07:59.630528   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:07:59.630528   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:07:59.636172   14040 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0528 14:07:59.637557   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700-m02
	I0528 14:07:59.637557   14040 round_trippers.go:469] Request Headers:
	I0528 14:07:59.637557   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:07:59.637633   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:07:59.641873   14040 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 14:07:59.642769   14040 pod_ready.go:102] pod "etcd-ha-941700-m02" in "kube-system" namespace has status "Ready":"False"
	I0528 14:08:00.129469   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods/etcd-ha-941700-m02
	I0528 14:08:00.129469   14040 round_trippers.go:469] Request Headers:
	I0528 14:08:00.129469   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:08:00.129469   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:08:00.135895   14040 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0528 14:08:00.137296   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700-m02
	I0528 14:08:00.137296   14040 round_trippers.go:469] Request Headers:
	I0528 14:08:00.137296   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:08:00.137296   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:08:00.141657   14040 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 14:08:00.630213   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods/etcd-ha-941700-m02
	I0528 14:08:00.630441   14040 round_trippers.go:469] Request Headers:
	I0528 14:08:00.630441   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:08:00.630441   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:08:00.638868   14040 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0528 14:08:00.640262   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700-m02
	I0528 14:08:00.640378   14040 round_trippers.go:469] Request Headers:
	I0528 14:08:00.640378   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:08:00.640378   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:08:00.644274   14040 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 14:08:01.129970   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods/etcd-ha-941700-m02
	I0528 14:08:01.129970   14040 round_trippers.go:469] Request Headers:
	I0528 14:08:01.129970   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:08:01.129970   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:08:01.135616   14040 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0528 14:08:01.136779   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700-m02
	I0528 14:08:01.136779   14040 round_trippers.go:469] Request Headers:
	I0528 14:08:01.136779   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:08:01.136779   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:08:01.140055   14040 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 14:08:01.141490   14040 pod_ready.go:92] pod "etcd-ha-941700-m02" in "kube-system" namespace has status "Ready":"True"
	I0528 14:08:01.141490   14040 pod_ready.go:81] duration metric: took 3.5236405s for pod "etcd-ha-941700-m02" in "kube-system" namespace to be "Ready" ...
	I0528 14:08:01.141490   14040 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-941700" in "kube-system" namespace to be "Ready" ...
	I0528 14:08:01.141490   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-941700
	I0528 14:08:01.141490   14040 round_trippers.go:469] Request Headers:
	I0528 14:08:01.141490   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:08:01.141490   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:08:01.145867   14040 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 14:08:01.146681   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700
	I0528 14:08:01.146681   14040 round_trippers.go:469] Request Headers:
	I0528 14:08:01.146681   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:08:01.146681   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:08:01.150366   14040 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 14:08:01.151104   14040 pod_ready.go:92] pod "kube-apiserver-ha-941700" in "kube-system" namespace has status "Ready":"True"
	I0528 14:08:01.151181   14040 pod_ready.go:81] duration metric: took 9.6908ms for pod "kube-apiserver-ha-941700" in "kube-system" namespace to be "Ready" ...
	I0528 14:08:01.151181   14040 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-941700-m02" in "kube-system" namespace to be "Ready" ...
	I0528 14:08:01.151264   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-941700-m02
	I0528 14:08:01.151306   14040 round_trippers.go:469] Request Headers:
	I0528 14:08:01.151342   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:08:01.151342   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:08:01.156690   14040 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0528 14:08:01.158014   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700-m02
	I0528 14:08:01.158014   14040 round_trippers.go:469] Request Headers:
	I0528 14:08:01.158014   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:08:01.158014   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:08:01.162638   14040 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 14:08:01.658616   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-941700-m02
	I0528 14:08:01.658616   14040 round_trippers.go:469] Request Headers:
	I0528 14:08:01.658707   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:08:01.658707   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:08:01.664004   14040 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0528 14:08:01.665794   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700-m02
	I0528 14:08:01.665794   14040 round_trippers.go:469] Request Headers:
	I0528 14:08:01.665794   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:08:01.665794   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:08:01.669833   14040 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 14:08:02.156036   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-941700-m02
	I0528 14:08:02.156036   14040 round_trippers.go:469] Request Headers:
	I0528 14:08:02.156148   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:08:02.156148   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:08:02.160530   14040 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 14:08:02.162448   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700-m02
	I0528 14:08:02.162572   14040 round_trippers.go:469] Request Headers:
	I0528 14:08:02.162572   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:08:02.162629   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:08:02.168506   14040 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0528 14:08:02.659474   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-941700-m02
	I0528 14:08:02.659537   14040 round_trippers.go:469] Request Headers:
	I0528 14:08:02.659537   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:08:02.659537   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:08:02.679423   14040 round_trippers.go:574] Response Status: 200 OK in 19 milliseconds
	I0528 14:08:02.681487   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700-m02
	I0528 14:08:02.681602   14040 round_trippers.go:469] Request Headers:
	I0528 14:08:02.681602   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:08:02.681602   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:08:02.685859   14040 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 14:08:02.687209   14040 pod_ready.go:92] pod "kube-apiserver-ha-941700-m02" in "kube-system" namespace has status "Ready":"True"
	I0528 14:08:02.687266   14040 pod_ready.go:81] duration metric: took 1.5360822s for pod "kube-apiserver-ha-941700-m02" in "kube-system" namespace to be "Ready" ...
	I0528 14:08:02.687266   14040 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-941700" in "kube-system" namespace to be "Ready" ...
	I0528 14:08:02.687380   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-941700
	I0528 14:08:02.687436   14040 round_trippers.go:469] Request Headers:
	I0528 14:08:02.687436   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:08:02.687476   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:08:02.692184   14040 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 14:08:02.751991   14040 request.go:629] Waited for 58.7585ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.216.124:8443/api/v1/nodes/ha-941700
	I0528 14:08:02.752205   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700
	I0528 14:08:02.752244   14040 round_trippers.go:469] Request Headers:
	I0528 14:08:02.752244   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:08:02.752280   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:08:02.757409   14040 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0528 14:08:02.759134   14040 pod_ready.go:92] pod "kube-controller-manager-ha-941700" in "kube-system" namespace has status "Ready":"True"
	I0528 14:08:02.759134   14040 pod_ready.go:81] duration metric: took 71.8673ms for pod "kube-controller-manager-ha-941700" in "kube-system" namespace to be "Ready" ...
	I0528 14:08:02.759193   14040 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-941700-m02" in "kube-system" namespace to be "Ready" ...
	I0528 14:08:02.954469   14040 request.go:629] Waited for 194.981ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-941700-m02
	I0528 14:08:02.954557   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-941700-m02
	I0528 14:08:02.954557   14040 round_trippers.go:469] Request Headers:
	I0528 14:08:02.954675   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:08:02.954675   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:08:02.961489   14040 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0528 14:08:03.142381   14040 request.go:629] Waited for 179.6949ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.216.124:8443/api/v1/nodes/ha-941700-m02
	I0528 14:08:03.142660   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700-m02
	I0528 14:08:03.142660   14040 round_trippers.go:469] Request Headers:
	I0528 14:08:03.142660   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:08:03.142660   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:08:03.148764   14040 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0528 14:08:03.345238   14040 request.go:629] Waited for 78.0303ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-941700-m02
	I0528 14:08:03.345238   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-941700-m02
	I0528 14:08:03.345365   14040 round_trippers.go:469] Request Headers:
	I0528 14:08:03.345365   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:08:03.345438   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:08:03.351952   14040 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0528 14:08:03.549536   14040 request.go:629] Waited for 196.3033ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.216.124:8443/api/v1/nodes/ha-941700-m02
	I0528 14:08:03.549649   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700-m02
	I0528 14:08:03.549867   14040 round_trippers.go:469] Request Headers:
	I0528 14:08:03.549867   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:08:03.549968   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:08:03.556252   14040 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0528 14:08:03.767102   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-941700-m02
	I0528 14:08:03.767102   14040 round_trippers.go:469] Request Headers:
	I0528 14:08:03.767102   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:08:03.767102   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:08:03.772696   14040 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0528 14:08:03.940326   14040 request.go:629] Waited for 166.2176ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.216.124:8443/api/v1/nodes/ha-941700-m02
	I0528 14:08:03.940579   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700-m02
	I0528 14:08:03.940579   14040 round_trippers.go:469] Request Headers:
	I0528 14:08:03.940579   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:08:03.940668   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:08:03.948952   14040 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0528 14:08:04.267301   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-941700-m02
	I0528 14:08:04.267301   14040 round_trippers.go:469] Request Headers:
	I0528 14:08:04.267301   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:08:04.267301   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:08:04.272149   14040 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 14:08:04.346569   14040 request.go:629] Waited for 72.9879ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.216.124:8443/api/v1/nodes/ha-941700-m02
	I0528 14:08:04.346660   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700-m02
	I0528 14:08:04.346730   14040 round_trippers.go:469] Request Headers:
	I0528 14:08:04.346730   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:08:04.346730   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:08:04.351310   14040 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 14:08:04.353347   14040 pod_ready.go:92] pod "kube-controller-manager-ha-941700-m02" in "kube-system" namespace has status "Ready":"True"
	I0528 14:08:04.353347   14040 pod_ready.go:81] duration metric: took 1.5941501s for pod "kube-controller-manager-ha-941700-m02" in "kube-system" namespace to be "Ready" ...
	I0528 14:08:04.353430   14040 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hrwn5" in "kube-system" namespace to be "Ready" ...
	I0528 14:08:04.548044   14040 request.go:629] Waited for 194.5456ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hrwn5
	I0528 14:08:04.548317   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hrwn5
	I0528 14:08:04.548317   14040 round_trippers.go:469] Request Headers:
	I0528 14:08:04.548317   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:08:04.548317   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:08:04.553209   14040 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 14:08:04.750320   14040 request.go:629] Waited for 195.3397ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.216.124:8443/api/v1/nodes/ha-941700
	I0528 14:08:04.750480   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700
	I0528 14:08:04.750480   14040 round_trippers.go:469] Request Headers:
	I0528 14:08:04.750480   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:08:04.750480   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:08:04.756315   14040 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0528 14:08:04.757544   14040 pod_ready.go:92] pod "kube-proxy-hrwn5" in "kube-system" namespace has status "Ready":"True"
	I0528 14:08:04.757637   14040 pod_ready.go:81] duration metric: took 404.2056ms for pod "kube-proxy-hrwn5" in "kube-system" namespace to be "Ready" ...
	I0528 14:08:04.757637   14040 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-spzwh" in "kube-system" namespace to be "Ready" ...
	I0528 14:08:04.952876   14040 request.go:629] Waited for 194.9362ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods/kube-proxy-spzwh
	I0528 14:08:04.952876   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods/kube-proxy-spzwh
	I0528 14:08:04.953154   14040 round_trippers.go:469] Request Headers:
	I0528 14:08:04.953154   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:08:04.953154   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:08:04.958350   14040 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0528 14:08:05.141290   14040 request.go:629] Waited for 180.98ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.216.124:8443/api/v1/nodes/ha-941700-m02
	I0528 14:08:05.141459   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700-m02
	I0528 14:08:05.141513   14040 round_trippers.go:469] Request Headers:
	I0528 14:08:05.141531   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:08:05.141531   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:08:05.147119   14040 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0528 14:08:05.147842   14040 pod_ready.go:92] pod "kube-proxy-spzwh" in "kube-system" namespace has status "Ready":"True"
	I0528 14:08:05.147842   14040 pod_ready.go:81] duration metric: took 390.2043ms for pod "kube-proxy-spzwh" in "kube-system" namespace to be "Ready" ...
	I0528 14:08:05.147842   14040 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-941700" in "kube-system" namespace to be "Ready" ...
	I0528 14:08:05.344273   14040 request.go:629] Waited for 196.252ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-941700
	I0528 14:08:05.344962   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-941700
	I0528 14:08:05.345115   14040 round_trippers.go:469] Request Headers:
	I0528 14:08:05.345115   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:08:05.345473   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:08:05.350567   14040 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0528 14:08:05.547560   14040 request.go:629] Waited for 196.2273ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.216.124:8443/api/v1/nodes/ha-941700
	I0528 14:08:05.547867   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700
	I0528 14:08:05.547867   14040 round_trippers.go:469] Request Headers:
	I0528 14:08:05.547867   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:08:05.547867   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:08:05.557084   14040 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0528 14:08:05.558505   14040 pod_ready.go:92] pod "kube-scheduler-ha-941700" in "kube-system" namespace has status "Ready":"True"
	I0528 14:08:05.558505   14040 pod_ready.go:81] duration metric: took 410.662ms for pod "kube-scheduler-ha-941700" in "kube-system" namespace to be "Ready" ...
	I0528 14:08:05.558573   14040 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-941700-m02" in "kube-system" namespace to be "Ready" ...
	I0528 14:08:05.754064   14040 request.go:629] Waited for 195.1988ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-941700-m02
	I0528 14:08:05.754284   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-941700-m02
	I0528 14:08:05.754284   14040 round_trippers.go:469] Request Headers:
	I0528 14:08:05.754357   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:08:05.754389   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:08:05.759787   14040 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0528 14:08:05.944025   14040 request.go:629] Waited for 183.367ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.216.124:8443/api/v1/nodes/ha-941700-m02
	I0528 14:08:05.944227   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700-m02
	I0528 14:08:05.944227   14040 round_trippers.go:469] Request Headers:
	I0528 14:08:05.944227   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:08:05.944227   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:08:05.948822   14040 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 14:08:05.949974   14040 pod_ready.go:92] pod "kube-scheduler-ha-941700-m02" in "kube-system" namespace has status "Ready":"True"
	I0528 14:08:05.949974   14040 pod_ready.go:81] duration metric: took 391.4009ms for pod "kube-scheduler-ha-941700-m02" in "kube-system" namespace to be "Ready" ...
	I0528 14:08:05.949974   14040 pod_ready.go:38] duration metric: took 8.4025639s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0528 14:08:05.950156   14040 api_server.go:52] waiting for apiserver process to appear ...
	I0528 14:08:05.965510   14040 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 14:08:05.998177   14040 api_server.go:72] duration metric: took 15.4569891s to wait for apiserver process to appear ...
	I0528 14:08:05.998277   14040 api_server.go:88] waiting for apiserver healthz status ...
	I0528 14:08:05.998373   14040 api_server.go:253] Checking apiserver healthz at https://172.18.216.124:8443/healthz ...
	I0528 14:08:06.005428   14040 api_server.go:279] https://172.18.216.124:8443/healthz returned 200:
	ok
	I0528 14:08:06.005428   14040 round_trippers.go:463] GET https://172.18.216.124:8443/version
	I0528 14:08:06.005428   14040 round_trippers.go:469] Request Headers:
	I0528 14:08:06.005428   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:08:06.005428   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:08:06.007415   14040 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0528 14:08:06.007415   14040 api_server.go:141] control plane version: v1.30.1
	I0528 14:08:06.007415   14040 api_server.go:131] duration metric: took 9.138ms to wait for apiserver health ...
	I0528 14:08:06.007415   14040 system_pods.go:43] waiting for kube-system pods to appear ...
	I0528 14:08:06.149594   14040 request.go:629] Waited for 141.9687ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods
	I0528 14:08:06.149750   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods
	I0528 14:08:06.149750   14040 round_trippers.go:469] Request Headers:
	I0528 14:08:06.149750   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:08:06.149750   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:08:06.160589   14040 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0528 14:08:06.166571   14040 system_pods.go:59] 17 kube-system pods found
	I0528 14:08:06.166571   14040 system_pods.go:61] "coredns-7db6d8ff4d-8prdw" [4736eb43-2c5f-4edc-8070-bd2f4cb03c25] Running
	I0528 14:08:06.166571   14040 system_pods.go:61] "coredns-7db6d8ff4d-kmzqb" [347c74c7-9b8d-4b7b-976e-4723c0a1c7f6] Running
	I0528 14:08:06.166571   14040 system_pods.go:61] "etcd-ha-941700" [4fa0a90e-5c48-4186-858f-b1dfd9eaa2e8] Running
	I0528 14:08:06.166571   14040 system_pods.go:61] "etcd-ha-941700-m02" [69dfe795-b2cd-49c9-bb3c-47e6d46c2411] Running
	I0528 14:08:06.166571   14040 system_pods.go:61] "kindnet-whc47" [82a221e0-726a-4f37-8f25-1a3f674f0dd0] Running
	I0528 14:08:06.166571   14040 system_pods.go:61] "kindnet-xrp24" [8f9e51b9-76c6-4a4b-a13e-9dce573292e4] Running
	I0528 14:08:06.166571   14040 system_pods.go:61] "kube-apiserver-ha-941700" [39e10d5a-cc1a-44bd-9737-49657f744c72] Running
	I0528 14:08:06.166571   14040 system_pods.go:61] "kube-apiserver-ha-941700-m02" [1a121e31-902b-4f9a-8ab7-b0318bb3b710] Running
	I0528 14:08:06.166571   14040 system_pods.go:61] "kube-controller-manager-ha-941700" [7bd528f3-312d-4e0c-8d7d-ad91436d1fe1] Running
	I0528 14:08:06.166571   14040 system_pods.go:61] "kube-controller-manager-ha-941700-m02" [8d22254a-beac-4bcf-bee3-7bc9b316fee5] Running
	I0528 14:08:06.166571   14040 system_pods.go:61] "kube-proxy-hrwn5" [b8766939-5796-4541-b98e-7ebd6f67cd88] Running
	I0528 14:08:06.166571   14040 system_pods.go:61] "kube-proxy-spzwh" [00625372-7eda-456b-8444-2c85da29285c] Running
	I0528 14:08:06.166571   14040 system_pods.go:61] "kube-scheduler-ha-941700" [7069782c-919c-4bf2-9c1b-2b5c0c88bbd6] Running
	I0528 14:08:06.166571   14040 system_pods.go:61] "kube-scheduler-ha-941700-m02" [a4bdc26c-1a2a-4ffe-ba5f-8214c9fa7de4] Running
	I0528 14:08:06.167584   14040 system_pods.go:61] "kube-vip-ha-941700" [f3ae818c-50ad-43ac-9b57-e335c92bf829] Running
	I0528 14:08:06.167584   14040 system_pods.go:61] "kube-vip-ha-941700-m02" [3881b05b-4814-4dbc-a948-6de440748ba6] Running
	I0528 14:08:06.167584   14040 system_pods.go:61] "storage-provisioner" [b958202b-ad05-4846-b47c-92c22aa98e03] Running
	I0528 14:08:06.167584   14040 system_pods.go:74] duration metric: took 160.1685ms to wait for pod list to return data ...
	I0528 14:08:06.167584   14040 default_sa.go:34] waiting for default service account to be created ...
	I0528 14:08:06.354053   14040 request.go:629] Waited for 186.1466ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.216.124:8443/api/v1/namespaces/default/serviceaccounts
	I0528 14:08:06.354230   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/namespaces/default/serviceaccounts
	I0528 14:08:06.354302   14040 round_trippers.go:469] Request Headers:
	I0528 14:08:06.354326   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:08:06.354326   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:08:06.359086   14040 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 14:08:06.360502   14040 default_sa.go:45] found service account: "default"
	I0528 14:08:06.360502   14040 default_sa.go:55] duration metric: took 192.9175ms for default service account to be created ...
	I0528 14:08:06.360502   14040 system_pods.go:116] waiting for k8s-apps to be running ...
	I0528 14:08:06.541490   14040 request.go:629] Waited for 180.8815ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods
	I0528 14:08:06.541795   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods
	I0528 14:08:06.541795   14040 round_trippers.go:469] Request Headers:
	I0528 14:08:06.541795   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:08:06.541795   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:08:06.552348   14040 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0528 14:08:06.560031   14040 system_pods.go:86] 17 kube-system pods found
	I0528 14:08:06.560080   14040 system_pods.go:89] "coredns-7db6d8ff4d-8prdw" [4736eb43-2c5f-4edc-8070-bd2f4cb03c25] Running
	I0528 14:08:06.560156   14040 system_pods.go:89] "coredns-7db6d8ff4d-kmzqb" [347c74c7-9b8d-4b7b-976e-4723c0a1c7f6] Running
	I0528 14:08:06.560156   14040 system_pods.go:89] "etcd-ha-941700" [4fa0a90e-5c48-4186-858f-b1dfd9eaa2e8] Running
	I0528 14:08:06.560156   14040 system_pods.go:89] "etcd-ha-941700-m02" [69dfe795-b2cd-49c9-bb3c-47e6d46c2411] Running
	I0528 14:08:06.560156   14040 system_pods.go:89] "kindnet-whc47" [82a221e0-726a-4f37-8f25-1a3f674f0dd0] Running
	I0528 14:08:06.560156   14040 system_pods.go:89] "kindnet-xrp24" [8f9e51b9-76c6-4a4b-a13e-9dce573292e4] Running
	I0528 14:08:06.560156   14040 system_pods.go:89] "kube-apiserver-ha-941700" [39e10d5a-cc1a-44bd-9737-49657f744c72] Running
	I0528 14:08:06.560156   14040 system_pods.go:89] "kube-apiserver-ha-941700-m02" [1a121e31-902b-4f9a-8ab7-b0318bb3b710] Running
	I0528 14:08:06.560205   14040 system_pods.go:89] "kube-controller-manager-ha-941700" [7bd528f3-312d-4e0c-8d7d-ad91436d1fe1] Running
	I0528 14:08:06.560273   14040 system_pods.go:89] "kube-controller-manager-ha-941700-m02" [8d22254a-beac-4bcf-bee3-7bc9b316fee5] Running
	I0528 14:08:06.560337   14040 system_pods.go:89] "kube-proxy-hrwn5" [b8766939-5796-4541-b98e-7ebd6f67cd88] Running
	I0528 14:08:06.560391   14040 system_pods.go:89] "kube-proxy-spzwh" [00625372-7eda-456b-8444-2c85da29285c] Running
	I0528 14:08:06.560391   14040 system_pods.go:89] "kube-scheduler-ha-941700" [7069782c-919c-4bf2-9c1b-2b5c0c88bbd6] Running
	I0528 14:08:06.560391   14040 system_pods.go:89] "kube-scheduler-ha-941700-m02" [a4bdc26c-1a2a-4ffe-ba5f-8214c9fa7de4] Running
	I0528 14:08:06.560391   14040 system_pods.go:89] "kube-vip-ha-941700" [f3ae818c-50ad-43ac-9b57-e335c92bf829] Running
	I0528 14:08:06.560450   14040 system_pods.go:89] "kube-vip-ha-941700-m02" [3881b05b-4814-4dbc-a948-6de440748ba6] Running
	I0528 14:08:06.560450   14040 system_pods.go:89] "storage-provisioner" [b958202b-ad05-4846-b47c-92c22aa98e03] Running
	I0528 14:08:06.560450   14040 system_pods.go:126] duration metric: took 199.9481ms to wait for k8s-apps to be running ...
	I0528 14:08:06.560450   14040 system_svc.go:44] waiting for kubelet service to be running ....
	I0528 14:08:06.572807   14040 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0528 14:08:06.597559   14040 system_svc.go:56] duration metric: took 37.1082ms WaitForService to wait for kubelet
	I0528 14:08:06.597621   14040 kubeadm.go:576] duration metric: took 16.0564319s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0528 14:08:06.597621   14040 node_conditions.go:102] verifying NodePressure condition ...
	I0528 14:08:06.745944   14040 request.go:629] Waited for 148.1484ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.216.124:8443/api/v1/nodes
	I0528 14:08:06.746169   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes
	I0528 14:08:06.746169   14040 round_trippers.go:469] Request Headers:
	I0528 14:08:06.746207   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:08:06.746207   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:08:06.752978   14040 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0528 14:08:06.754989   14040 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0528 14:08:06.755043   14040 node_conditions.go:123] node cpu capacity is 2
	I0528 14:08:06.755113   14040 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0528 14:08:06.755113   14040 node_conditions.go:123] node cpu capacity is 2
	I0528 14:08:06.755113   14040 node_conditions.go:105] duration metric: took 157.4915ms to run NodePressure ...
	I0528 14:08:06.755113   14040 start.go:240] waiting for startup goroutines ...
	I0528 14:08:06.755169   14040 start.go:254] writing updated cluster config ...
	I0528 14:08:06.758155   14040 out.go:177] 
	I0528 14:08:06.774211   14040 config.go:182] Loaded profile config "ha-941700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0528 14:08:06.774211   14040 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\config.json ...
	I0528 14:08:06.780249   14040 out.go:177] * Starting "ha-941700-m03" control-plane node in "ha-941700" cluster
	I0528 14:08:06.783853   14040 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0528 14:08:06.783853   14040 cache.go:56] Caching tarball of preloaded images
	I0528 14:08:06.783853   14040 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0528 14:08:06.783853   14040 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0528 14:08:06.783853   14040 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\config.json ...
	I0528 14:08:06.790874   14040 start.go:360] acquireMachinesLock for ha-941700-m03: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0528 14:08:06.790874   14040 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-941700-m03"
	I0528 14:08:06.790874   14040 start.go:93] Provisioning new machine with config: &{Name:ha-941700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuberne
tesVersion:v1.30.1 ClusterName:ha-941700 Namespace:default APIServerHAVIP:172.18.223.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.18.216.124 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.18.211.44 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false
ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountU
ID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0528 14:08:06.791862   14040 start.go:125] createHost starting for "m03" (driver="hyperv")
	I0528 14:08:06.795867   14040 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0528 14:08:06.795867   14040 start.go:159] libmachine.API.Create for "ha-941700" (driver="hyperv")
	I0528 14:08:06.795867   14040 client.go:168] LocalClient.Create starting
	I0528 14:08:06.795867   14040 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0528 14:08:06.796849   14040 main.go:141] libmachine: Decoding PEM data...
	I0528 14:08:06.796849   14040 main.go:141] libmachine: Parsing certificate...
	I0528 14:08:06.796849   14040 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0528 14:08:06.796849   14040 main.go:141] libmachine: Decoding PEM data...
	I0528 14:08:06.796849   14040 main.go:141] libmachine: Parsing certificate...
	I0528 14:08:06.796849   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0528 14:08:08.775934   14040 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0528 14:08:08.775934   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:08:08.776075   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0528 14:08:10.558059   14040 main.go:141] libmachine: [stdout =====>] : False
	
	I0528 14:08:10.558126   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:08:10.558189   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0528 14:08:12.083225   14040 main.go:141] libmachine: [stdout =====>] : True
	
	I0528 14:08:12.083850   14040 main.go:141] libmachine: [stderr =====>] : 
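The two PowerShell probes above gate VM creation on membership in the Hyper-V Administrators group (SID S-1-5-32-578) and the built-in Administrators role, each answered with a bare True or False. A rough Go sketch of issuing such a probe and reading the answer (the command string is copied from the log; the surrounding code is illustrative, not minikube's implementation):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // isElevated runs a PowerShell one-liner that prints True or False
    // and interprets the trimmed stdout.
    func isElevated() (bool, error) {
    	cmd := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive",
    		`@([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")`)
    	out, err := cmd.Output()
    	if err != nil {
    		return false, err
    	}
    	return strings.EqualFold(strings.TrimSpace(string(out)), "True"), nil
    }

    func main() {
    	ok, err := isElevated()
    	fmt.Println(ok, err)
    }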
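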
	I0528 14:08:12.083986   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0528 14:08:15.968955   14040 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0528 14:08:15.968955   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:08:15.972532   14040 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1716398070-18934-amd64.iso...
	I0528 14:08:16.422039   14040 main.go:141] libmachine: Creating SSH key...
	I0528 14:08:16.557645   14040 main.go:141] libmachine: Creating VM...
	I0528 14:08:16.557645   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0528 14:08:19.572420   14040 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0528 14:08:19.572420   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:08:19.572420   14040 main.go:141] libmachine: Using switch "Default Switch"
	I0528 14:08:19.572420   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0528 14:08:21.373627   14040 main.go:141] libmachine: [stdout =====>] : True
	
	I0528 14:08:21.373627   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:08:21.373718   14040 main.go:141] libmachine: Creating VHD
	I0528 14:08:21.374197   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-941700-m03\fixed.vhd' -SizeBytes 10MB -Fixed
	I0528 14:08:25.281700   14040 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-941700-m03\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 4D7299B5-C95C-4A8E-BF76-6C2E26C8B9BA
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0528 14:08:25.281946   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:08:25.281946   14040 main.go:141] libmachine: Writing magic tar header
	I0528 14:08:25.282031   14040 main.go:141] libmachine: Writing SSH key tar header
	I0528 14:08:25.291039   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-941700-m03\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-941700-m03\disk.vhd' -VHDType Dynamic -DeleteSource
	I0528 14:08:28.562534   14040 main.go:141] libmachine: [stdout =====>] : 
	I0528 14:08:28.562986   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:08:28.563050   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-941700-m03\disk.vhd' -SizeBytes 20000MB
	I0528 14:08:31.186150   14040 main.go:141] libmachine: [stdout =====>] : 
	I0528 14:08:31.186150   14040 main.go:141] libmachine: [stderr =====>] : 
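The three Hyper-V cmdlets above implement a common boot2docker-style trick: a tiny fixed-size VHD is created so that SSH key material can be written straight into the raw file (the "magic tar header" lines that follow), the image is then converted to a dynamic VHD, and finally resized to the requested 20000MB. A hedged Go sketch of driving the same cmdlet sequence via os/exec; the paths are placeholders and the raw key-writing step is deliberately omitted:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // ps runs one Hyper-V PowerShell command and returns its combined output.
    func ps(command string) (string, error) {
    	out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", command).CombinedOutput()
    	return string(out), err
    }

    func main() {
    	fixed := `C:\tmp\m03\fixed.vhd` // placeholder paths, not the test's
    	disk := `C:\tmp\m03\disk.vhd`

    	steps := []string{
    		fmt.Sprintf(`Hyper-V\New-VHD -Path '%s' -SizeBytes 10MB -Fixed`, fixed),
    		// ...key material would be written into fixed.vhd at this point...
    		fmt.Sprintf(`Hyper-V\Convert-VHD -Path '%s' -DestinationPath '%s' -VHDType Dynamic -DeleteSource`, fixed, disk),
    		fmt.Sprintf(`Hyper-V\Resize-VHD -Path '%s' -SizeBytes 20000MB`, disk),
    	}
    	for _, s := range steps {
    		if out, err := ps(s); err != nil {
    			fmt.Printf("step failed: %v\n%s\n", err, out)
    			return
    		}
    	}
    	fmt.Println("disk prepared:", disk)
    }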
	I0528 14:08:31.186150   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-941700-m03 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-941700-m03' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0528 14:08:34.925523   14040 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-941700-m03 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0528 14:08:34.925523   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:08:34.925966   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-941700-m03 -DynamicMemoryEnabled $false
	I0528 14:08:37.292940   14040 main.go:141] libmachine: [stdout =====>] : 
	I0528 14:08:37.292940   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:08:37.293710   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-941700-m03 -Count 2
	I0528 14:08:39.566466   14040 main.go:141] libmachine: [stdout =====>] : 
	I0528 14:08:39.566466   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:08:39.566597   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-941700-m03 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-941700-m03\boot2docker.iso'
	I0528 14:08:42.240333   14040 main.go:141] libmachine: [stdout =====>] : 
	I0528 14:08:42.240333   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:08:42.240333   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-941700-m03 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-941700-m03\disk.vhd'
	I0528 14:08:45.025713   14040 main.go:141] libmachine: [stdout =====>] : 
	I0528 14:08:45.026533   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:08:45.026533   14040 main.go:141] libmachine: Starting VM...
	I0528 14:08:45.026533   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-941700-m03
	I0528 14:08:48.273400   14040 main.go:141] libmachine: [stdout =====>] : 
	I0528 14:08:48.273400   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:08:48.273400   14040 main.go:141] libmachine: Waiting for host to start...
	I0528 14:08:48.273400   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700-m03 ).state
	I0528 14:08:50.713415   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:08:50.714229   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:08:50.714308   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700-m03 ).networkadapters[0]).ipaddresses[0]
	I0528 14:08:53.374692   14040 main.go:141] libmachine: [stdout =====>] : 
	I0528 14:08:53.375370   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:08:54.388425   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700-m03 ).state
	I0528 14:08:56.713724   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:08:56.713724   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:08:56.713932   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700-m03 ).networkadapters[0]).ipaddresses[0]
	I0528 14:08:59.351788   14040 main.go:141] libmachine: [stdout =====>] : 
	I0528 14:08:59.351788   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:09:00.366165   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700-m03 ).state
	I0528 14:09:02.670953   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:09:02.670953   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:09:02.672062   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700-m03 ).networkadapters[0]).ipaddresses[0]
	I0528 14:09:05.277941   14040 main.go:141] libmachine: [stdout =====>] : 
	I0528 14:09:05.277941   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:09:06.292842   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700-m03 ).state
	I0528 14:09:08.593963   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:09:08.593963   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:09:08.594197   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700-m03 ).networkadapters[0]).ipaddresses[0]
	I0528 14:09:11.261357   14040 main.go:141] libmachine: [stdout =====>] : 
	I0528 14:09:11.262143   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:09:12.273777   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700-m03 ).state
	I0528 14:09:14.612984   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:09:14.612984   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:09:14.612984   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700-m03 ).networkadapters[0]).ipaddresses[0]
	I0528 14:09:17.320817   14040 main.go:141] libmachine: [stdout =====>] : 172.18.222.186
	
	I0528 14:09:17.320817   14040 main.go:141] libmachine: [stderr =====>] : 
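The "Waiting for host to start..." block above is a plain poll: query the VM state, then the first IP address of its first network adapter, and retry on empty answers until an address such as 172.18.222.186 appears. A small Go sketch of that retry loop (VM name and timeout are taken from or inspired by the log; the loop itself is illustrative):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    // firstIP asks Hyper-V for the first IP address of the VM's first adapter.
    func firstIP(vm string) (string, error) {
    	out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive",
    		fmt.Sprintf(`(( Hyper-V\Get-VM %s ).networkadapters[0]).ipaddresses[0]`, vm)).Output()
    	return strings.TrimSpace(string(out)), err
    }

    // waitForIP polls until an address is reported or the timeout expires.
    func waitForIP(vm string, timeout time.Duration) (string, error) {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		ip, err := firstIP(vm)
    		if err == nil && ip != "" {
    			return ip, nil
    		}
    		time.Sleep(time.Second)
    	}
    	return "", fmt.Errorf("no IP for %s within %s", vm, timeout)
    }

    func main() {
    	ip, err := waitForIP("ha-941700-m03", 3*time.Minute)
    	fmt.Println(ip, err)
    }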
	I0528 14:09:17.321719   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700-m03 ).state
	I0528 14:09:19.564811   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:09:19.565697   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:09:19.565697   14040 machine.go:94] provisionDockerMachine start ...
	I0528 14:09:19.565844   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700-m03 ).state
	I0528 14:09:21.842647   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:09:21.843075   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:09:21.843164   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700-m03 ).networkadapters[0]).ipaddresses[0]
	I0528 14:09:24.496551   14040 main.go:141] libmachine: [stdout =====>] : 172.18.222.186
	
	I0528 14:09:24.496617   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:09:24.502487   14040 main.go:141] libmachine: Using SSH client type: native
	I0528 14:09:24.513195   14040 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5ca4a0] 0x5cd080 <nil>  [] 0s} 172.18.222.186 22 <nil> <nil>}
	I0528 14:09:24.513195   14040 main.go:141] libmachine: About to run SSH command:
	hostname
	I0528 14:09:24.635228   14040 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0528 14:09:24.635286   14040 buildroot.go:166] provisioning hostname "ha-941700-m03"
	I0528 14:09:24.635344   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700-m03 ).state
	I0528 14:09:26.841642   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:09:26.841642   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:09:26.841642   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700-m03 ).networkadapters[0]).ipaddresses[0]
	I0528 14:09:29.450461   14040 main.go:141] libmachine: [stdout =====>] : 172.18.222.186
	
	I0528 14:09:29.451124   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:09:29.458297   14040 main.go:141] libmachine: Using SSH client type: native
	I0528 14:09:29.458978   14040 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5ca4a0] 0x5cd080 <nil>  [] 0s} 172.18.222.186 22 <nil> <nil>}
	I0528 14:09:29.458978   14040 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-941700-m03 && echo "ha-941700-m03" | sudo tee /etc/hostname
	I0528 14:09:29.605025   14040 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-941700-m03
	
	I0528 14:09:29.605131   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700-m03 ).state
	I0528 14:09:31.833133   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:09:31.833817   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:09:31.834068   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700-m03 ).networkadapters[0]).ipaddresses[0]
	I0528 14:09:34.511029   14040 main.go:141] libmachine: [stdout =====>] : 172.18.222.186
	
	I0528 14:09:34.511845   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:09:34.518738   14040 main.go:141] libmachine: Using SSH client type: native
	I0528 14:09:34.519494   14040 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5ca4a0] 0x5cd080 <nil>  [] 0s} 172.18.222.186 22 <nil> <nil>}
	I0528 14:09:34.519494   14040 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-941700-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-941700-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-941700-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0528 14:09:34.665631   14040 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0528 14:09:34.665631   14040 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0528 14:09:34.665631   14040 buildroot.go:174] setting up certificates
	I0528 14:09:34.665631   14040 provision.go:84] configureAuth start
	I0528 14:09:34.665631   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700-m03 ).state
	I0528 14:09:36.918745   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:09:36.918745   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:09:36.918745   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700-m03 ).networkadapters[0]).ipaddresses[0]
	I0528 14:09:39.591203   14040 main.go:141] libmachine: [stdout =====>] : 172.18.222.186
	
	I0528 14:09:39.591203   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:09:39.591203   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700-m03 ).state
	I0528 14:09:41.819579   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:09:41.820409   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:09:41.820409   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700-m03 ).networkadapters[0]).ipaddresses[0]
	I0528 14:09:44.478304   14040 main.go:141] libmachine: [stdout =====>] : 172.18.222.186
	
	I0528 14:09:44.478304   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:09:44.478392   14040 provision.go:143] copyHostCerts
	I0528 14:09:44.478523   14040 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0528 14:09:44.478523   14040 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0528 14:09:44.478523   14040 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0528 14:09:44.479215   14040 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0528 14:09:44.480715   14040 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0528 14:09:44.481066   14040 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0528 14:09:44.481066   14040 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0528 14:09:44.481580   14040 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0528 14:09:44.481750   14040 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0528 14:09:44.481750   14040 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0528 14:09:44.481750   14040 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0528 14:09:44.481750   14040 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1675 bytes)
	I0528 14:09:44.484146   14040 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-941700-m03 san=[127.0.0.1 172.18.222.186 ha-941700-m03 localhost minikube]
	I0528 14:09:44.681391   14040 provision.go:177] copyRemoteCerts
	I0528 14:09:44.698055   14040 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0528 14:09:44.698055   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700-m03 ).state
	I0528 14:09:46.957204   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:09:46.957204   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:09:46.957296   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700-m03 ).networkadapters[0]).ipaddresses[0]
	I0528 14:09:49.613112   14040 main.go:141] libmachine: [stdout =====>] : 172.18.222.186
	
	I0528 14:09:49.613112   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:09:49.613112   14040 sshutil.go:53] new ssh client: &{IP:172.18.222.186 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-941700-m03\id_rsa Username:docker}
	I0528 14:09:49.708912   14040 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.0107798s)
	I0528 14:09:49.708912   14040 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0528 14:09:49.708912   14040 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0528 14:09:49.753651   14040 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0528 14:09:49.754169   14040 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0528 14:09:49.800940   14040 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0528 14:09:49.801567   14040 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0528 14:09:49.847832   14040 provision.go:87] duration metric: took 15.182169s to configureAuth
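configureAuth above generates a server certificate whose subject alternative names cover 127.0.0.1, the VM's address, the hostname and localhost, then copies it to /etc/docker on the guest. A self-contained Go sketch of producing such a SAN certificate with the standard library (self-signed, no CA chain, values modeled on the log; this is purely illustrative, not minikube's cert code):

    package main

    import (
    	"crypto/ecdsa"
    	"crypto/elliptic"
    	"crypto/rand"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"example"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// SANs modeled on the log: loopback, the VM IP, hostname, localhost.
    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.18.222.186")},
    		DNSNames:    []string{"ha-941700-m03", "localhost", "minikube"},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }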
	I0528 14:09:49.847832   14040 buildroot.go:189] setting minikube options for container-runtime
	I0528 14:09:49.848454   14040 config.go:182] Loaded profile config "ha-941700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0528 14:09:49.848608   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700-m03 ).state
	I0528 14:09:52.071802   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:09:52.072196   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:09:52.072196   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700-m03 ).networkadapters[0]).ipaddresses[0]
	I0528 14:09:54.737842   14040 main.go:141] libmachine: [stdout =====>] : 172.18.222.186
	
	I0528 14:09:54.738761   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:09:54.744441   14040 main.go:141] libmachine: Using SSH client type: native
	I0528 14:09:54.745110   14040 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5ca4a0] 0x5cd080 <nil>  [] 0s} 172.18.222.186 22 <nil> <nil>}
	I0528 14:09:54.745110   14040 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0528 14:09:54.871652   14040 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0528 14:09:54.871785   14040 buildroot.go:70] root file system type: tmpfs
	I0528 14:09:54.871922   14040 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0528 14:09:54.871922   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700-m03 ).state
	I0528 14:09:57.084646   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:09:57.084783   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:09:57.084917   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700-m03 ).networkadapters[0]).ipaddresses[0]
	I0528 14:09:59.699711   14040 main.go:141] libmachine: [stdout =====>] : 172.18.222.186
	
	I0528 14:09:59.699711   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:09:59.705132   14040 main.go:141] libmachine: Using SSH client type: native
	I0528 14:09:59.705852   14040 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5ca4a0] 0x5cd080 <nil>  [] 0s} 172.18.222.186 22 <nil> <nil>}
	I0528 14:09:59.705852   14040 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.18.216.124"
	Environment="NO_PROXY=172.18.216.124,172.18.211.44"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0528 14:09:59.864507   14040 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.18.216.124
	Environment=NO_PROXY=172.18.216.124,172.18.211.44
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0528 14:09:59.864624   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700-m03 ).state
	I0528 14:10:02.071722   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:10:02.072230   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:10:02.072230   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700-m03 ).networkadapters[0]).ipaddresses[0]
	I0528 14:10:04.665133   14040 main.go:141] libmachine: [stdout =====>] : 172.18.222.186
	
	I0528 14:10:04.665133   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:10:04.671844   14040 main.go:141] libmachine: Using SSH client type: native
	I0528 14:10:04.672556   14040 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5ca4a0] 0x5cd080 <nil>  [] 0s} 172.18.222.186 22 <nil> <nil>}
	I0528 14:10:04.672556   14040 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0528 14:10:06.815575   14040 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0528 14:10:06.815575   14040 machine.go:97] duration metric: took 47.2497098s to provisionDockerMachine
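The docker.service contents pushed over SSH above are rendered from a template: the NO_PROXY environment lines and the insecure-registry CIDR are the per-cluster pieces, everything else is fixed. A minimal text/template sketch of rendering such a unit in Go (the trimmed-down unit body and the field names are illustrative assumptions, not minikube's actual template):

    package main

    import (
    	"os"
    	"text/template"
    )

    const unit = `[Unit]
    Description=Docker Application Container Engine
    After=network.target minikube-automount.service docker.socket

    [Service]
    Type=notify
    Restart=on-failure
    {{range .NoProxy}}Environment="NO_PROXY={{.}}"
    {{end}}ExecStart=
    ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --insecure-registry {{.ServiceCIDR}}

    [Install]
    WantedBy=multi-user.target
    `

    type params struct {
    	NoProxy     []string
    	ServiceCIDR string
    }

    func main() {
    	t := template.Must(template.New("docker.service").Parse(unit))
    	// Values taken from the log: the earlier control-plane IPs and the service CIDR.
    	t.Execute(os.Stdout, params{
    		NoProxy:     []string{"172.18.216.124", "172.18.216.124,172.18.211.44"},
    		ServiceCIDR: "10.96.0.0/12",
    	})
    }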
	I0528 14:10:06.815575   14040 client.go:171] duration metric: took 2m0.0194558s to LocalClient.Create
	I0528 14:10:06.815575   14040 start.go:167] duration metric: took 2m0.0194558s to libmachine.API.Create "ha-941700"
	I0528 14:10:06.815575   14040 start.go:293] postStartSetup for "ha-941700-m03" (driver="hyperv")
	I0528 14:10:06.815575   14040 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0528 14:10:06.829533   14040 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0528 14:10:06.829533   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700-m03 ).state
	I0528 14:10:09.081619   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:10:09.082083   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:10:09.082206   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700-m03 ).networkadapters[0]).ipaddresses[0]
	I0528 14:10:11.758312   14040 main.go:141] libmachine: [stdout =====>] : 172.18.222.186
	
	I0528 14:10:11.758312   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:10:11.758625   14040 sshutil.go:53] new ssh client: &{IP:172.18.222.186 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-941700-m03\id_rsa Username:docker}
	I0528 14:10:11.867169   14040 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.037506s)
	I0528 14:10:11.881141   14040 ssh_runner.go:195] Run: cat /etc/os-release
	I0528 14:10:11.888258   14040 info.go:137] Remote host: Buildroot 2023.02.9
	I0528 14:10:11.888258   14040 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0528 14:10:11.888917   14040 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0528 14:10:11.889441   14040 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\17802.pem -> 17802.pem in /etc/ssl/certs
	I0528 14:10:11.889441   14040 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\17802.pem -> /etc/ssl/certs/17802.pem
	I0528 14:10:11.905352   14040 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0528 14:10:11.931212   14040 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\17802.pem --> /etc/ssl/certs/17802.pem (1708 bytes)
	I0528 14:10:11.982506   14040 start.go:296] duration metric: took 5.1669203s for postStartSetup
	I0528 14:10:11.985169   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700-m03 ).state
	I0528 14:10:14.245028   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:10:14.245028   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:10:14.245909   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700-m03 ).networkadapters[0]).ipaddresses[0]
	I0528 14:10:16.941961   14040 main.go:141] libmachine: [stdout =====>] : 172.18.222.186
	
	I0528 14:10:16.941961   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:10:16.942493   14040 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\config.json ...
	I0528 14:10:16.944851   14040 start.go:128] duration metric: took 2m10.1527163s to createHost
	I0528 14:10:16.944851   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700-m03 ).state
	I0528 14:10:19.210392   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:10:19.210392   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:10:19.210820   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700-m03 ).networkadapters[0]).ipaddresses[0]
	I0528 14:10:21.920781   14040 main.go:141] libmachine: [stdout =====>] : 172.18.222.186
	
	I0528 14:10:21.921528   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:10:21.927844   14040 main.go:141] libmachine: Using SSH client type: native
	I0528 14:10:21.928917   14040 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5ca4a0] 0x5cd080 <nil>  [] 0s} 172.18.222.186 22 <nil> <nil>}
	I0528 14:10:21.928917   14040 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0528 14:10:22.053349   14040 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716930622.053839995
	
	I0528 14:10:22.053349   14040 fix.go:216] guest clock: 1716930622.053839995
	I0528 14:10:22.053349   14040 fix.go:229] Guest: 2024-05-28 14:10:22.053839995 -0700 PDT Remote: 2024-05-28 14:10:16.9448519 -0700 PDT m=+575.821034401 (delta=5.108988095s)
	I0528 14:10:22.053349   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700-m03 ).state
	I0528 14:10:24.278665   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:10:24.278665   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:10:24.278665   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700-m03 ).networkadapters[0]).ipaddresses[0]
	I0528 14:10:26.945126   14040 main.go:141] libmachine: [stdout =====>] : 172.18.222.186
	
	I0528 14:10:26.945126   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:10:26.950981   14040 main.go:141] libmachine: Using SSH client type: native
	I0528 14:10:26.951679   14040 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5ca4a0] 0x5cd080 <nil>  [] 0s} 172.18.222.186 22 <nil> <nil>}
	I0528 14:10:26.951679   14040 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1716930622
	I0528 14:10:27.084373   14040 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue May 28 21:10:22 UTC 2024
	
	I0528 14:10:27.084373   14040 fix.go:236] clock set: Tue May 28 21:10:22 UTC 2024
	 (err=<nil>)
	I0528 14:10:27.084373   14040 start.go:83] releasing machines lock for "ha-941700-m03", held for 2m20.2932049s
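The clock-fix step above compares the guest's `date +%s.%N` output against the host's wall clock (a delta of roughly 5.1s here) and, past some threshold, rewrites the guest clock with `sudo date -s @<epoch>`. A hedged Go sketch of the comparison, with the SSH transport replaced by a plain string parameter and an assumed 2-second tolerance:

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    	"time"
    )

    // guestDelta parses the output of `date +%s.%N` from the guest and returns
    // how far the guest clock is behind (negative) or ahead (positive) of now.
    func guestDelta(dateOutput string) (time.Duration, error) {
    	parts := strings.SplitN(strings.TrimSpace(dateOutput), ".", 2)
    	sec, err := strconv.ParseInt(parts[0], 10, 64)
    	if err != nil {
    		return 0, err
    	}
    	var nsec int64
    	if len(parts) == 2 {
    		nsec, _ = strconv.ParseInt(parts[1], 10, 64)
    	}
    	return time.Unix(sec, nsec).Sub(time.Now()), nil
    }

    func main() {
    	d, err := guestDelta("1716930622.053839995") // sample value from the log
    	if err != nil {
    		panic(err)
    	}
    	if d > 2*time.Second || d < -2*time.Second { // threshold is an assumption
    		fmt.Printf("clock skew %v; would run: sudo date -s @%d\n", d, time.Now().Unix())
    	} else {
    		fmt.Println("clock skew", d, "is within tolerance")
    	}
    }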
	I0528 14:10:27.084373   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700-m03 ).state
	I0528 14:10:29.365819   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:10:29.366612   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:10:29.366612   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700-m03 ).networkadapters[0]).ipaddresses[0]
	I0528 14:10:32.095892   14040 main.go:141] libmachine: [stdout =====>] : 172.18.222.186
	
	I0528 14:10:32.095973   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:10:32.101053   14040 out.go:177] * Found network options:
	I0528 14:10:32.103487   14040 out.go:177]   - NO_PROXY=172.18.216.124,172.18.211.44
	W0528 14:10:32.105577   14040 proxy.go:119] fail to check proxy env: Error ip not in block
	W0528 14:10:32.105577   14040 proxy.go:119] fail to check proxy env: Error ip not in block
	I0528 14:10:32.108126   14040 out.go:177]   - NO_PROXY=172.18.216.124,172.18.211.44
	W0528 14:10:32.110565   14040 proxy.go:119] fail to check proxy env: Error ip not in block
	W0528 14:10:32.110565   14040 proxy.go:119] fail to check proxy env: Error ip not in block
	W0528 14:10:32.111945   14040 proxy.go:119] fail to check proxy env: Error ip not in block
	W0528 14:10:32.111945   14040 proxy.go:119] fail to check proxy env: Error ip not in block
	I0528 14:10:32.113891   14040 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0528 14:10:32.113891   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700-m03 ).state
	I0528 14:10:32.127132   14040 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0528 14:10:32.127396   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700-m03 ).state
	I0528 14:10:34.444629   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:10:34.444629   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:10:34.444629   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700-m03 ).networkadapters[0]).ipaddresses[0]
	I0528 14:10:34.450254   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:10:34.451140   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:10:34.451140   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700-m03 ).networkadapters[0]).ipaddresses[0]
	I0528 14:10:37.225752   14040 main.go:141] libmachine: [stdout =====>] : 172.18.222.186
	
	I0528 14:10:37.225752   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:10:37.226316   14040 sshutil.go:53] new ssh client: &{IP:172.18.222.186 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-941700-m03\id_rsa Username:docker}
	I0528 14:10:37.258268   14040 main.go:141] libmachine: [stdout =====>] : 172.18.222.186
	
	I0528 14:10:37.258730   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:10:37.258943   14040 sshutil.go:53] new ssh client: &{IP:172.18.222.186 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-941700-m03\id_rsa Username:docker}
	I0528 14:10:37.409354   14040 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.2819474s)
	I0528 14:10:37.410260   14040 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.2963579s)
	W0528 14:10:37.410260   14040 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0528 14:10:37.422277   14040 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0528 14:10:37.457653   14040 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0528 14:10:37.457653   14040 start.go:494] detecting cgroup driver to use...
	I0528 14:10:37.457929   14040 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0528 14:10:37.511284   14040 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0528 14:10:37.543988   14040 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0528 14:10:37.566052   14040 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0528 14:10:37.578646   14040 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0528 14:10:37.611915   14040 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0528 14:10:37.647371   14040 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0528 14:10:37.684800   14040 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0528 14:10:37.719833   14040 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0528 14:10:37.754848   14040 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0528 14:10:37.795163   14040 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0528 14:10:37.833885   14040 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0528 14:10:37.866529   14040 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0528 14:10:37.897978   14040 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0528 14:10:37.927248   14040 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 14:10:38.139122   14040 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0528 14:10:38.175619   14040 start.go:494] detecting cgroup driver to use...
	I0528 14:10:38.189425   14040 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0528 14:10:38.228397   14040 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0528 14:10:38.265393   14040 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0528 14:10:38.329444   14040 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0528 14:10:38.375834   14040 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0528 14:10:38.414609   14040 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0528 14:10:38.483289   14040 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0528 14:10:38.510385   14040 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0528 14:10:38.558266   14040 ssh_runner.go:195] Run: which cri-dockerd
	I0528 14:10:38.578103   14040 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0528 14:10:38.597128   14040 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0528 14:10:38.639782   14040 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0528 14:10:38.846423   14040 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0528 14:10:39.030530   14040 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0528 14:10:39.030651   14040 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0528 14:10:39.083703   14040 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 14:10:39.285178   14040 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0528 14:10:41.823618   14040 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5384347s)
	I0528 14:10:41.836324   14040 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0528 14:10:41.875517   14040 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0528 14:10:41.911538   14040 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0528 14:10:42.144709   14040 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0528 14:10:42.372858   14040 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 14:10:42.579693   14040 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0528 14:10:42.623054   14040 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0528 14:10:42.660193   14040 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 14:10:42.870380   14040 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0528 14:10:42.987952   14040 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
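"Will wait 60s for socket path /var/run/cri-dockerd.sock" is again a stat-and-retry loop ending in the `stat` run on the next line. A compact Go version of waiting for a path to exist (path and timeout copied from the log, poll interval assumed):

    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    // waitForPath polls os.Stat until the path exists or the timeout expires.
    func waitForPath(path string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for {
    		if _, err := os.Stat(path); err == nil {
    			return nil
    		}
    		if time.Now().After(deadline) {
    			return fmt.Errorf("%s did not appear within %s", path, timeout)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    }

    func main() {
    	fmt.Println(waitForPath("/var/run/cri-dockerd.sock", 60*time.Second))
    }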
	I0528 14:10:43.002467   14040 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0528 14:10:43.012232   14040 start.go:562] Will wait 60s for crictl version
	I0528 14:10:43.024142   14040 ssh_runner.go:195] Run: which crictl
	I0528 14:10:43.043222   14040 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0528 14:10:43.108797   14040 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0528 14:10:43.120059   14040 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0528 14:10:43.163713   14040 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0528 14:10:43.203812   14040 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.0.2 ...
	I0528 14:10:43.206241   14040 out.go:177]   - env NO_PROXY=172.18.216.124
	I0528 14:10:43.209076   14040 out.go:177]   - env NO_PROXY=172.18.216.124,172.18.211.44
	I0528 14:10:43.210715   14040 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0528 14:10:43.216637   14040 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0528 14:10:43.216756   14040 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0528 14:10:43.216756   14040 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0528 14:10:43.216827   14040 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:91:32:cc Flags:up|broadcast|multicast|running}
	I0528 14:10:43.220198   14040 ip.go:210] interface addr: fe80::6a65:d82d:f435:d71f/64
	I0528 14:10:43.220198   14040 ip.go:210] interface addr: 172.18.208.1/20
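getIPForInterface above walks the host's adapters, skips those whose names do not start with "vEthernet (Default Switch)", and then reads the matching adapter's addresses (an IPv6 link-local address and 172.18.208.1/20 here, the latter becoming host.minikube.internal below). A minimal Go sketch of the same name-prefix lookup using the standard net package:

    package main

    import (
    	"fmt"
    	"net"
    	"strings"
    )

    // addrsForPrefix returns the addresses of the first interface whose name
    // starts with the given prefix, mirroring the lookup in the log.
    func addrsForPrefix(prefix string) ([]net.Addr, error) {
    	ifaces, err := net.Interfaces()
    	if err != nil {
    		return nil, err
    	}
    	for _, ifc := range ifaces {
    		if strings.HasPrefix(ifc.Name, prefix) {
    			return ifc.Addrs()
    		}
    	}
    	return nil, fmt.Errorf("no interface matching %q", prefix)
    }

    func main() {
    	addrs, err := addrsForPrefix("vEthernet (Default Switch)")
    	fmt.Println(addrs, err)
    }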
	I0528 14:10:43.234025   14040 ssh_runner.go:195] Run: grep 172.18.208.1	host.minikube.internal$ /etc/hosts
	I0528 14:10:43.240713   14040 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.18.208.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0528 14:10:43.270268   14040 mustload.go:65] Loading cluster: ha-941700
	I0528 14:10:43.270784   14040 config.go:182] Loaded profile config "ha-941700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0528 14:10:43.271676   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700 ).state
	I0528 14:10:45.480601   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:10:45.480728   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:10:45.480728   14040 host.go:66] Checking if "ha-941700" exists ...
	I0528 14:10:45.481418   14040 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700 for IP: 172.18.222.186
	I0528 14:10:45.481418   14040 certs.go:194] generating shared ca certs ...
	I0528 14:10:45.481418   14040 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 14:10:45.482077   14040 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0528 14:10:45.482077   14040 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0528 14:10:45.482077   14040 certs.go:256] generating profile certs ...
	I0528 14:10:45.482838   14040 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\client.key
	I0528 14:10:45.483388   14040 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\apiserver.key.e1a03d42
	I0528 14:10:45.483660   14040 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\apiserver.crt.e1a03d42 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.18.216.124 172.18.211.44 172.18.222.186 172.18.223.254]
	I0528 14:10:45.575472   14040 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\apiserver.crt.e1a03d42 ...
	I0528 14:10:45.575472   14040 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\apiserver.crt.e1a03d42: {Name:mk80dd9cb1fecbbe16ab6061edd6aa6ccaf0f3dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 14:10:45.577535   14040 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\apiserver.key.e1a03d42 ...
	I0528 14:10:45.577535   14040 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\apiserver.key.e1a03d42: {Name:mk3053e4efc73b34b3dee884f9a337bd870325c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 14:10:45.578147   14040 certs.go:381] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\apiserver.crt.e1a03d42 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\apiserver.crt
	I0528 14:10:45.598789   14040 certs.go:385] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\apiserver.key.e1a03d42 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\apiserver.key
	I0528 14:10:45.600311   14040 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\proxy-client.key
	I0528 14:10:45.600311   14040 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0528 14:10:45.600489   14040 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0528 14:10:45.600489   14040 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0528 14:10:45.600489   14040 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0528 14:10:45.600489   14040 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0528 14:10:45.601299   14040 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0528 14:10:45.601485   14040 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0528 14:10:45.601731   14040 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
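
	(Annotation) The profile apiserver certificate regenerated above carries every relevant address as an IP SAN, including the two existing control planes, the new node 172.18.222.186 and the HA VIP 172.18.223.254. A compressed crypto/x509 sketch of issuing such a certificate; it is self-signed for brevity, whereas minikube signs with the shared minikubeCA key and writes the pair under the profile directory.

	    package main

	    import (
	        "crypto/rand"
	        "crypto/rsa"
	        "crypto/x509"
	        "crypto/x509/pkix"
	        "fmt"
	        "math/big"
	        "net"
	        "time"
	    )

	    func main() {
	        // SAN list copied from the crypto.go:68 line above.
	        sans := []net.IP{
	            net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
	            net.ParseIP("172.18.216.124"), net.ParseIP("172.18.211.44"),
	            net.ParseIP("172.18.222.186"), net.ParseIP("172.18.223.254"),
	        }
	        key, err := rsa.GenerateKey(rand.Reader, 2048)
	        if err != nil {
	            panic(err)
	        }
	        tmpl := &x509.Certificate{
	            SerialNumber: big.NewInt(1),
	            Subject:      pkix.Name{CommonName: "minikube"},
	            NotBefore:    time.Now(),
	            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
	            IPAddresses:  sans,
	            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
	            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	        }
	        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	        if err != nil {
	            panic(err)
	        }
	        fmt.Printf("issued apiserver cert, %d DER bytes, %d IP SANs\n", len(der), len(sans))
	    }
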
	I0528 14:10:45.602541   14040 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\1780.pem (1338 bytes)
	W0528 14:10:45.602601   14040 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\1780_empty.pem, impossibly tiny 0 bytes
	I0528 14:10:45.602601   14040 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0528 14:10:45.602601   14040 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0528 14:10:45.603453   14040 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0528 14:10:45.603453   14040 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0528 14:10:45.604359   14040 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\17802.pem (1708 bytes)
	I0528 14:10:45.604668   14040 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0528 14:10:45.604843   14040 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\1780.pem -> /usr/share/ca-certificates/1780.pem
	I0528 14:10:45.604843   14040 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\17802.pem -> /usr/share/ca-certificates/17802.pem
	I0528 14:10:45.604843   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700 ).state
	I0528 14:10:47.913881   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:10:47.913881   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:10:47.913970   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700 ).networkadapters[0]).ipaddresses[0]
	I0528 14:10:50.657939   14040 main.go:141] libmachine: [stdout =====>] : 172.18.216.124
	
	I0528 14:10:50.657939   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:10:50.659271   14040 sshutil.go:53] new ssh client: &{IP:172.18.216.124 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-941700\id_rsa Username:docker}
	I0528 14:10:50.767503   14040 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0528 14:10:50.775661   14040 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0528 14:10:50.815145   14040 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0528 14:10:50.822247   14040 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0528 14:10:50.858198   14040 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0528 14:10:50.865802   14040 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0528 14:10:50.910123   14040 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0528 14:10:50.918502   14040 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0528 14:10:50.955979   14040 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0528 14:10:50.963263   14040 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0528 14:10:50.998151   14040 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0528 14:10:51.007586   14040 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0528 14:10:51.033245   14040 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0528 14:10:51.085291   14040 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0528 14:10:51.140192   14040 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0528 14:10:51.197706   14040 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0528 14:10:51.249704   14040 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0528 14:10:51.297669   14040 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0528 14:10:51.345931   14040 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0528 14:10:51.394164   14040 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-941700\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0528 14:10:51.442278   14040 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0528 14:10:51.491703   14040 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\1780.pem --> /usr/share/ca-certificates/1780.pem (1338 bytes)
	I0528 14:10:51.540828   14040 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\17802.pem --> /usr/share/ca-certificates/17802.pem (1708 bytes)
	I0528 14:10:51.588409   14040 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0528 14:10:51.620626   14040 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0528 14:10:51.652109   14040 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0528 14:10:51.684053   14040 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0528 14:10:51.716676   14040 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0528 14:10:51.748953   14040 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0528 14:10:51.780608   14040 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0528 14:10:51.827385   14040 ssh_runner.go:195] Run: openssl version
	I0528 14:10:51.849803   14040 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1780.pem && ln -fs /usr/share/ca-certificates/1780.pem /etc/ssl/certs/1780.pem"
	I0528 14:10:51.885467   14040 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1780.pem
	I0528 14:10:51.892711   14040 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 28 20:41 /usr/share/ca-certificates/1780.pem
	I0528 14:10:51.905580   14040 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1780.pem
	I0528 14:10:51.928874   14040 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1780.pem /etc/ssl/certs/51391683.0"
	I0528 14:10:51.963888   14040 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17802.pem && ln -fs /usr/share/ca-certificates/17802.pem /etc/ssl/certs/17802.pem"
	I0528 14:10:51.997510   14040 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17802.pem
	I0528 14:10:52.007360   14040 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 28 20:41 /usr/share/ca-certificates/17802.pem
	I0528 14:10:52.021635   14040 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17802.pem
	I0528 14:10:52.044815   14040 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/17802.pem /etc/ssl/certs/3ec20f2e.0"
	I0528 14:10:52.080016   14040 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0528 14:10:52.115599   14040 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0528 14:10:52.122722   14040 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 28 20:25 /usr/share/ca-certificates/minikubeCA.pem
	I0528 14:10:52.134614   14040 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0528 14:10:52.158327   14040 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
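
	(Annotation) The ln -fs commands above build the OpenSSL hash links in /etc/ssl/certs: each CA PEM gets a symlink named <subject-hash>.0 (51391683.0, 3ec20f2e.0, b5213941.0) so the library can find it by subject. A small sketch of deriving that name the same way the log does, by shelling out to openssl; it only prints the command it would run.

	    package main

	    import (
	        "fmt"
	        "os/exec"
	        "strings"
	    )

	    func main() {
	        pem := "/usr/share/ca-certificates/minikubeCA.pem"
	        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	        if err != nil {
	            panic(err)
	        }
	        hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem above
	        link := "/etc/ssl/certs/" + hash + ".0"
	        // minikube performs the link via sudo over SSH; the same privileges apply locally.
	        fmt.Printf("ln -fs %s %s\n", pem, link)
	    }
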
	I0528 14:10:52.194034   14040 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0528 14:10:52.202404   14040 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0528 14:10:52.202404   14040 kubeadm.go:928] updating node {m03 172.18.222.186 8443 v1.30.1 docker true true} ...
	I0528 14:10:52.202404   14040 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-941700-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.18.222.186
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-941700 Namespace:default APIServerHAVIP:172.18.223.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
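
	(Annotation) The kubeadm.go:940 dump above is the kubelet systemd override for the new node, pinning --hostname-override and --node-ip so m03 registers with its own address. A sketch of assembling that ExecStart line; the template minikube actually uses lives in its bootstrapper and carries more flags, so this only mirrors the values visible in the log.

	    package main

	    import "fmt"

	    func main() {
	        const (
	            version  = "v1.30.1"
	            nodeName = "ha-941700-m03"   // hostname-override from the log above
	            nodeIP   = "172.18.222.186"
	        )
	        execStart := fmt.Sprintf(
	            "/var/lib/minikube/binaries/%s/kubelet "+
	                "--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf "+
	                "--config=/var/lib/kubelet/config.yaml "+
	                "--hostname-override=%s --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=%s",
	            version, nodeName, nodeIP)
	        fmt.Println("ExecStart=" + execStart)
	    }
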
	I0528 14:10:52.203001   14040 kube-vip.go:115] generating kube-vip config ...
	I0528 14:10:52.216049   14040 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0528 14:10:52.249679   14040 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0528 14:10:52.249832   14040 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.18.223.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
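
	(Annotation) The static pod manifest above runs kube-vip with leader election plus control-plane load balancing (cp_enable, lb_enable), so the VIP 172.18.223.254:8443 floats across all three control-plane nodes. A minimal text/template sketch of stamping out such a manifest with the VIP and port as parameters; this is not minikube's actual template, only the parameterized shape of it, and it keeps just the fields that vary per cluster.

	    package main

	    import (
	        "os"
	        "text/template"
	    )

	    const manifest = `apiVersion: v1
	    kind: Pod
	    metadata:
	      name: kube-vip
	      namespace: kube-system
	    spec:
	      containers:
	      - name: kube-vip
	        image: ghcr.io/kube-vip/kube-vip:v0.8.0
	        args: ["manager"]
	        env:
	        - {name: address, value: "{{.VIP}}"}
	        - {name: port, value: "{{.Port}}"}
	        - {name: cp_enable, value: "true"}
	        - {name: lb_enable, value: "true"}
	      hostNetwork: true
	    `

	    func main() {
	        t := template.Must(template.New("kube-vip").Parse(manifest))
	        _ = t.Execute(os.Stdout, struct {
	            VIP  string
	            Port string
	        }{VIP: "172.18.223.254", Port: "8443"})
	    }
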
	I0528 14:10:52.261884   14040 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0528 14:10:52.281163   14040 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.1': No such file or directory
	
	Initiating transfer...
	I0528 14:10:52.298176   14040 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.1
	I0528 14:10:52.317249   14040 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl.sha256
	I0528 14:10:52.317249   14040 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet.sha256
	I0528 14:10:52.317249   14040 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm.sha256
	I0528 14:10:52.317249   14040 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubectl -> /var/lib/minikube/binaries/v1.30.1/kubectl
	I0528 14:10:52.317249   14040 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubeadm -> /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0528 14:10:52.333951   14040 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0528 14:10:52.333951   14040 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubectl
	I0528 14:10:52.335828   14040 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0528 14:10:52.341481   14040 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubeadm': No such file or directory
	I0528 14:10:52.341481   14040 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubeadm --> /var/lib/minikube/binaries/v1.30.1/kubeadm (50249880 bytes)
	I0528 14:10:52.342734   14040 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubectl': No such file or directory
	I0528 14:10:52.342734   14040 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubectl --> /var/lib/minikube/binaries/v1.30.1/kubectl (51454104 bytes)
	I0528 14:10:52.376727   14040 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubelet -> /var/lib/minikube/binaries/v1.30.1/kubelet
	I0528 14:10:52.391477   14040 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubelet
	I0528 14:10:52.459721   14040 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubelet': No such file or directory
	I0528 14:10:52.460024   14040 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubelet --> /var/lib/minikube/binaries/v1.30.1/kubelet (100100024 bytes)
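
	(Annotation) Because /var/lib/minikube/binaries/v1.30.1 does not exist on the new node, kubeadm, kubectl and kubelet are pushed from the host cache; the binary.go lines note they originate from dl.k8s.io with a companion .sha256 checksum file. A sketch of that download-and-verify step, simplified to a single binary and using the public URLs shown in the log.

	    package main

	    import (
	        "crypto/sha256"
	        "encoding/hex"
	        "fmt"
	        "io"
	        "net/http"
	        "strings"
	    )

	    func fetch(url string) ([]byte, error) {
	        resp, err := http.Get(url)
	        if err != nil {
	            return nil, err
	        }
	        defer resp.Body.Close()
	        if resp.StatusCode != http.StatusOK {
	            return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
	        }
	        return io.ReadAll(resp.Body)
	    }

	    func main() {
	        base := "https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl"
	        bin, err := fetch(base)
	        if err != nil {
	            panic(err)
	        }
	        sum, err := fetch(base + ".sha256")
	        if err != nil {
	            panic(err)
	        }
	        got := sha256.Sum256(bin)
	        want := strings.Fields(string(sum))[0] // the .sha256 file holds the bare hex digest
	        if hex.EncodeToString(got[:]) != want {
	            panic("checksum mismatch")
	        }
	        fmt.Printf("kubectl verified, %d bytes\n", len(bin)) // ~51 MB per the scp line above
	    }
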
	I0528 14:10:53.670762   14040 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0528 14:10:53.694732   14040 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0528 14:10:53.732373   14040 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0528 14:10:53.764612   14040 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0528 14:10:53.809669   14040 ssh_runner.go:195] Run: grep 172.18.223.254	control-plane.minikube.internal$ /etc/hosts
	I0528 14:10:53.816209   14040 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.18.223.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
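
	(Annotation) Both host.minikube.internal (earlier) and control-plane.minikube.internal (here) are kept current with the same idempotent pattern: strip any existing line for the name, append the fresh IP, and copy the result back over /etc/hosts. The same logic in Go, operating on an in-memory hosts string for illustration; minikube runs the bash version remotely via sudo.

	    package main

	    import (
	        "fmt"
	        "strings"
	    )

	    // updateHosts mirrors the bash one-liner in the log: drop stale entries
	    // for the host name, then append "<ip>\t<name>".
	    func updateHosts(contents, ip, name string) string {
	        var kept []string
	        for _, line := range strings.Split(contents, "\n") {
	            if strings.HasSuffix(strings.TrimRight(line, " "), "\t"+name) {
	                continue
	            }
	            kept = append(kept, line)
	        }
	        return strings.TrimRight(strings.Join(kept, "\n"), "\n") +
	            fmt.Sprintf("\n%s\t%s\n", ip, name)
	    }

	    func main() {
	        hosts := "127.0.0.1\tlocalhost\n172.18.200.1\tcontrol-plane.minikube.internal\n"
	        fmt.Print(updateHosts(hosts, "172.18.223.254", "control-plane.minikube.internal"))
	    }
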
	I0528 14:10:53.857067   14040 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 14:10:54.063162   14040 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0528 14:10:54.095410   14040 host.go:66] Checking if "ha-941700" exists ...
	I0528 14:10:54.096288   14040 start.go:316] joinCluster: &{Name:ha-941700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-941700 Namespace:default APIServerHAVIP:172.18.223.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.18.216.124 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.18.211.44 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:172.18.222.186 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0528 14:10:54.096319   14040 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0528 14:10:54.096319   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-941700 ).state
	I0528 14:10:56.322618   14040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 14:10:56.322618   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:10:56.322978   14040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-941700 ).networkadapters[0]).ipaddresses[0]
	I0528 14:10:59.093663   14040 main.go:141] libmachine: [stdout =====>] : 172.18.216.124
	
	I0528 14:10:59.093663   14040 main.go:141] libmachine: [stderr =====>] : 
	I0528 14:10:59.093896   14040 sshutil.go:53] new ssh client: &{IP:172.18.216.124 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-941700\id_rsa Username:docker}
	I0528 14:10:59.304969   14040 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm token create --print-join-command --ttl=0": (5.2086388s)
	I0528 14:10:59.304969   14040 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:172.18.222.186 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0528 14:10:59.304969   14040 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 80b2w8.hbne8fn0cjnhnf5r --discovery-token-ca-cert-hash sha256:539ff17077d9a18fe6ace348cbfbe22affc6c6390f84ec7c9294c19b6ec125ea --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-941700-m03 --control-plane --apiserver-advertise-address=172.18.222.186 --apiserver-bind-port=8443"
	I0528 14:11:46.419428   14040 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 80b2w8.hbne8fn0cjnhnf5r --discovery-token-ca-cert-hash sha256:539ff17077d9a18fe6ace348cbfbe22affc6c6390f84ec7c9294c19b6ec125ea --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-941700-m03 --control-plane --apiserver-advertise-address=172.18.222.186 --apiserver-bind-port=8443": (47.1143602s)
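
	(Annotation) The 47-second kubeadm join above is what promotes m03 to a control-plane member: the token and discovery hash come from the token create --print-join-command run on the primary, while --control-plane and --apiserver-advertise-address mark the node as an additional API server behind the VIP. A sketch of assembling that command line from its pieces; the constants are copied from the log, whereas a real caller would parse the token and hash out of the primary's output.

	    package main

	    import "fmt"

	    func main() {
	        const (
	            endpoint  = "control-plane.minikube.internal:8443"
	            token     = "80b2w8.hbne8fn0cjnhnf5r" // printed by `kubeadm token create` on the primary
	            caHash    = "sha256:539ff17077d9a18fe6ace348cbfbe22affc6c6390f84ec7c9294c19b6ec125ea"
	            nodeName  = "ha-941700-m03"
	            advertise = "172.18.222.186"
	        )
	        cmd := fmt.Sprintf(
	            "kubeadm join %s --token %s --discovery-token-ca-cert-hash %s "+
	                "--ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock "+
	                "--node-name=%s --control-plane --apiserver-advertise-address=%s --apiserver-bind-port=8443",
	            endpoint, token, caHash, nodeName, advertise)
	        fmt.Println(cmd)
	    }
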
	I0528 14:11:46.419428   14040 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0528 14:11:47.203711   14040 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-941700-m03 minikube.k8s.io/updated_at=2024_05_28T14_11_47_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=c95b4fdda455689199e2a93674568b261e34dc82 minikube.k8s.io/name=ha-941700 minikube.k8s.io/primary=false
	I0528 14:11:47.382287   14040 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-941700-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0528 14:11:47.532104   14040 start.go:318] duration metric: took 53.4357045s to joinCluster
	I0528 14:11:47.532355   14040 start.go:234] Will wait 6m0s for node &{Name:m03 IP:172.18.222.186 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0528 14:11:47.534475   14040 out.go:177] * Verifying Kubernetes components...
	I0528 14:11:47.532581   14040 config.go:182] Loaded profile config "ha-941700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0528 14:11:47.553161   14040 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 14:11:47.990467   14040 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0528 14:11:48.029062   14040 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0528 14:11:48.029062   14040 kapi.go:59] client config for ha-941700: &rest.Config{Host:"https://172.18.223.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-941700\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-941700\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1a6b5a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0528 14:11:48.029062   14040 kubeadm.go:477] Overriding stale ClientConfig host https://172.18.223.254:8443 with https://172.18.216.124:8443
	I0528 14:11:48.031083   14040 node_ready.go:35] waiting up to 6m0s for node "ha-941700-m03" to be "Ready" ...
	I0528 14:11:48.031214   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700-m03
	I0528 14:11:48.031214   14040 round_trippers.go:469] Request Headers:
	I0528 14:11:48.031214   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:11:48.031214   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:11:48.045609   14040 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0528 14:11:48.535859   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700-m03
	I0528 14:11:48.535859   14040 round_trippers.go:469] Request Headers:
	I0528 14:11:48.535859   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:11:48.535859   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:11:48.553739   14040 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0528 14:11:49.044775   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700-m03
	I0528 14:11:49.044775   14040 round_trippers.go:469] Request Headers:
	I0528 14:11:49.044775   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:11:49.044775   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:11:49.051459   14040 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0528 14:11:49.536242   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700-m03
	I0528 14:11:49.536242   14040 round_trippers.go:469] Request Headers:
	I0528 14:11:49.536242   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:11:49.536242   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:11:49.542975   14040 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0528 14:11:50.037502   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700-m03
	I0528 14:11:50.037502   14040 round_trippers.go:469] Request Headers:
	I0528 14:11:50.037502   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:11:50.037502   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:11:50.045133   14040 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0528 14:11:50.045903   14040 node_ready.go:53] node "ha-941700-m03" has status "Ready":"False"
	I0528 14:11:50.541887   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700-m03
	I0528 14:11:50.541887   14040 round_trippers.go:469] Request Headers:
	I0528 14:11:50.541887   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:11:50.541887   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:11:50.553261   14040 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0528 14:11:51.034885   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700-m03
	I0528 14:11:51.034885   14040 round_trippers.go:469] Request Headers:
	I0528 14:11:51.034885   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:11:51.034885   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:11:51.039554   14040 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 14:11:51.540665   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700-m03
	I0528 14:11:51.540665   14040 round_trippers.go:469] Request Headers:
	I0528 14:11:51.540665   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:11:51.540665   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:11:51.688248   14040 round_trippers.go:574] Response Status: 200 OK in 147 milliseconds
	I0528 14:11:52.034872   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700-m03
	I0528 14:11:52.034872   14040 round_trippers.go:469] Request Headers:
	I0528 14:11:52.034872   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:11:52.034872   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:11:52.040823   14040 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0528 14:11:52.536051   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700-m03
	I0528 14:11:52.536051   14040 round_trippers.go:469] Request Headers:
	I0528 14:11:52.536051   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:11:52.536153   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:11:52.563510   14040 round_trippers.go:574] Response Status: 200 OK in 27 milliseconds
	I0528 14:11:52.564576   14040 node_ready.go:53] node "ha-941700-m03" has status "Ready":"False"
	I0528 14:11:53.041417   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700-m03
	I0528 14:11:53.041499   14040 round_trippers.go:469] Request Headers:
	I0528 14:11:53.041499   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:11:53.041499   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:11:53.045924   14040 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 14:11:53.531738   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700-m03
	I0528 14:11:53.531955   14040 round_trippers.go:469] Request Headers:
	I0528 14:11:53.531955   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:11:53.531955   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:11:53.536888   14040 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 14:11:54.038486   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700-m03
	I0528 14:11:54.038486   14040 round_trippers.go:469] Request Headers:
	I0528 14:11:54.038486   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:11:54.038486   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:11:54.043102   14040 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 14:11:54.534745   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700-m03
	I0528 14:11:54.534745   14040 round_trippers.go:469] Request Headers:
	I0528 14:11:54.534745   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:11:54.534745   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:11:54.540306   14040 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0528 14:11:55.035907   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700-m03
	I0528 14:11:55.035907   14040 round_trippers.go:469] Request Headers:
	I0528 14:11:55.035907   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:11:55.035907   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:11:55.041538   14040 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0528 14:11:55.042335   14040 node_ready.go:53] node "ha-941700-m03" has status "Ready":"False"
	I0528 14:11:55.538860   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700-m03
	I0528 14:11:55.539055   14040 round_trippers.go:469] Request Headers:
	I0528 14:11:55.539055   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:11:55.539055   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:11:55.544656   14040 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0528 14:11:56.041032   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700-m03
	I0528 14:11:56.041289   14040 round_trippers.go:469] Request Headers:
	I0528 14:11:56.041289   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:11:56.041289   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:11:56.045662   14040 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 14:11:56.545325   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700-m03
	I0528 14:11:56.545637   14040 round_trippers.go:469] Request Headers:
	I0528 14:11:56.545637   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:11:56.545637   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:11:56.553090   14040 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0528 14:11:57.042251   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700-m03
	I0528 14:11:57.042329   14040 round_trippers.go:469] Request Headers:
	I0528 14:11:57.042329   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:11:57.042329   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:11:57.047601   14040 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0528 14:11:57.048718   14040 node_ready.go:53] node "ha-941700-m03" has status "Ready":"False"
	I0528 14:11:57.541691   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700-m03
	I0528 14:11:57.541691   14040 round_trippers.go:469] Request Headers:
	I0528 14:11:57.541691   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:11:57.541802   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:11:57.546506   14040 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 14:11:58.031776   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700-m03
	I0528 14:11:58.032130   14040 round_trippers.go:469] Request Headers:
	I0528 14:11:58.032130   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:11:58.032130   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:12:07.117053   14040 round_trippers.go:574] Response Status: 200 OK in 9084 milliseconds
	I0528 14:12:07.125417   14040 node_ready.go:53] node "ha-941700-m03" has status "Ready":"False"
	I0528 14:12:07.125417   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700-m03
	I0528 14:12:07.125417   14040 round_trippers.go:469] Request Headers:
	I0528 14:12:07.125417   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:12:07.125417   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:12:07.184201   14040 round_trippers.go:574] Response Status: 200 OK in 58 milliseconds
	I0528 14:12:07.185493   14040 node_ready.go:49] node "ha-941700-m03" has status "Ready":"True"
	I0528 14:12:07.185548   14040 node_ready.go:38] duration metric: took 19.1543708s for node "ha-941700-m03" to be "Ready" ...
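
	(Annotation) node_ready.go above polls GET /api/v1/nodes/ha-941700-m03 roughly every 500ms until the Ready condition flips to True (19.15s here, including one 9-second API stall). A stdlib-only sketch of the same loop; in reality minikube goes through a client-go client authenticated with the profile's client certificates, so the bare http.Client and InsecureSkipVerify below are shortcuts for illustration.

	    package main

	    import (
	        "crypto/tls"
	        "encoding/json"
	        "fmt"
	        "net/http"
	        "time"
	    )

	    type node struct {
	        Status struct {
	            Conditions []struct {
	                Type   string `json:"type"`
	                Status string `json:"status"`
	            } `json:"conditions"`
	        } `json:"status"`
	    }

	    func main() {
	        // A real client would present the profile's client cert/key; without
	        // credentials the API server will reject the request.
	        client := &http.Client{Transport: &http.Transport{
	            TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	        }}
	        url := "https://172.18.216.124:8443/api/v1/nodes/ha-941700-m03"
	        deadline := time.Now().Add(6 * time.Minute) // matches "waiting up to 6m0s" in the log
	        for time.Now().Before(deadline) {
	            resp, err := client.Get(url)
	            if err == nil {
	                var n node
	                _ = json.NewDecoder(resp.Body).Decode(&n)
	                resp.Body.Close()
	                for _, c := range n.Status.Conditions {
	                    if c.Type == "Ready" && c.Status == "True" {
	                        fmt.Println("node is Ready")
	                        return
	                    }
	                }
	            }
	            time.Sleep(500 * time.Millisecond)
	        }
	        fmt.Println("timed out waiting for Ready")
	    }
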
	I0528 14:12:07.185639   14040 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0528 14:12:07.185696   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods
	I0528 14:12:07.185696   14040 round_trippers.go:469] Request Headers:
	I0528 14:12:07.185696   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:12:07.185696   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:12:07.255235   14040 round_trippers.go:574] Response Status: 200 OK in 69 milliseconds
	I0528 14:12:07.266538   14040 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-8prdw" in "kube-system" namespace to be "Ready" ...
	I0528 14:12:07.266538   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8prdw
	I0528 14:12:07.266538   14040 round_trippers.go:469] Request Headers:
	I0528 14:12:07.266538   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:12:07.266538   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:12:07.285610   14040 round_trippers.go:574] Response Status: 200 OK in 19 milliseconds
	I0528 14:12:07.287670   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700
	I0528 14:12:07.287728   14040 round_trippers.go:469] Request Headers:
	I0528 14:12:07.287728   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:12:07.287728   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:12:07.320544   14040 round_trippers.go:574] Response Status: 200 OK in 32 milliseconds
	I0528 14:12:07.321619   14040 pod_ready.go:92] pod "coredns-7db6d8ff4d-8prdw" in "kube-system" namespace has status "Ready":"True"
	I0528 14:12:07.321619   14040 pod_ready.go:81] duration metric: took 55.0811ms for pod "coredns-7db6d8ff4d-8prdw" in "kube-system" namespace to be "Ready" ...
	I0528 14:12:07.321619   14040 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-kmzqb" in "kube-system" namespace to be "Ready" ...
	I0528 14:12:07.321758   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kmzqb
	I0528 14:12:07.321908   14040 round_trippers.go:469] Request Headers:
	I0528 14:12:07.321908   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:12:07.321908   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:12:07.329927   14040 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0528 14:12:07.330869   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700
	I0528 14:12:07.330955   14040 round_trippers.go:469] Request Headers:
	I0528 14:12:07.331019   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:12:07.331019   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:12:07.335126   14040 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 14:12:07.336132   14040 pod_ready.go:92] pod "coredns-7db6d8ff4d-kmzqb" in "kube-system" namespace has status "Ready":"True"
	I0528 14:12:07.336132   14040 pod_ready.go:81] duration metric: took 14.4209ms for pod "coredns-7db6d8ff4d-kmzqb" in "kube-system" namespace to be "Ready" ...
	I0528 14:12:07.336132   14040 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-941700" in "kube-system" namespace to be "Ready" ...
	I0528 14:12:07.336132   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods/etcd-ha-941700
	I0528 14:12:07.336132   14040 round_trippers.go:469] Request Headers:
	I0528 14:12:07.336132   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:12:07.336132   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:12:07.352316   14040 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0528 14:12:07.355029   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700
	I0528 14:12:07.355145   14040 round_trippers.go:469] Request Headers:
	I0528 14:12:07.355204   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:12:07.355204   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:12:07.363406   14040 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0528 14:12:07.364411   14040 pod_ready.go:92] pod "etcd-ha-941700" in "kube-system" namespace has status "Ready":"True"
	I0528 14:12:07.364411   14040 pod_ready.go:81] duration metric: took 28.2795ms for pod "etcd-ha-941700" in "kube-system" namespace to be "Ready" ...
	I0528 14:12:07.364411   14040 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-941700-m02" in "kube-system" namespace to be "Ready" ...
	I0528 14:12:07.364411   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods/etcd-ha-941700-m02
	I0528 14:12:07.364411   14040 round_trippers.go:469] Request Headers:
	I0528 14:12:07.364411   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:12:07.364411   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:12:07.371131   14040 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0528 14:12:07.372886   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700-m02
	I0528 14:12:07.372886   14040 round_trippers.go:469] Request Headers:
	I0528 14:12:07.372886   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:12:07.372886   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:12:07.376930   14040 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 14:12:07.378681   14040 pod_ready.go:92] pod "etcd-ha-941700-m02" in "kube-system" namespace has status "Ready":"True"
	I0528 14:12:07.378766   14040 pod_ready.go:81] duration metric: took 14.2701ms for pod "etcd-ha-941700-m02" in "kube-system" namespace to be "Ready" ...
	I0528 14:12:07.378766   14040 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-941700-m03" in "kube-system" namespace to be "Ready" ...
	I0528 14:12:07.378824   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods/etcd-ha-941700-m03
	I0528 14:12:07.378890   14040 round_trippers.go:469] Request Headers:
	I0528 14:12:07.378890   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:12:07.378890   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:12:07.384092   14040 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0528 14:12:07.533925   14040 request.go:629] Waited for 148.5561ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.216.124:8443/api/v1/nodes/ha-941700-m03
	I0528 14:12:07.533987   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700-m03
	I0528 14:12:07.533987   14040 round_trippers.go:469] Request Headers:
	I0528 14:12:07.533987   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:12:07.533987   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:12:07.540217   14040 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0528 14:12:07.541992   14040 pod_ready.go:92] pod "etcd-ha-941700-m03" in "kube-system" namespace has status "Ready":"True"
	I0528 14:12:07.541992   14040 pod_ready.go:81] duration metric: took 163.2252ms for pod "etcd-ha-941700-m03" in "kube-system" namespace to be "Ready" ...
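
	(Annotation) The "Waited for ... due to client-side throttling, not priority and fairness" lines come from client-go's client-side rate limiter spacing out this burst of node and pod GETs; with QPS:0 and Burst:0 in the rest.Config dump above, client-go falls back to its defaults (5 QPS, burst 10), which is where the ~150-200ms waits come from. A sketch of the same token-bucket behaviour using golang.org/x/time/rate, assuming those default numbers.

	    package main

	    import (
	        "context"
	        "fmt"
	        "time"

	        "golang.org/x/time/rate"
	    )

	    func main() {
	        // 5 requests/second with a burst of 10, mirroring client-go's defaults.
	        limiter := rate.NewLimiter(rate.Limit(5), 10)
	        start := time.Now()
	        for i := 0; i < 15; i++ {
	            if err := limiter.Wait(context.Background()); err != nil {
	                panic(err)
	            }
	            fmt.Printf("request %2d sent at %v\n", i+1, time.Since(start).Round(10*time.Millisecond))
	        }
	        // The first 10 go out immediately (burst); the rest are spaced ~200ms
	        // apart, matching the waits reported by request.go:629 above.
	    }
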
	I0528 14:12:07.541992   14040 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-941700" in "kube-system" namespace to be "Ready" ...
	I0528 14:12:07.737071   14040 request.go:629] Waited for 194.9037ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-941700
	I0528 14:12:07.737308   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-941700
	I0528 14:12:07.737381   14040 round_trippers.go:469] Request Headers:
	I0528 14:12:07.737381   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:12:07.737461   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:12:07.745390   14040 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0528 14:12:07.926364   14040 request.go:629] Waited for 179.4381ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.216.124:8443/api/v1/nodes/ha-941700
	I0528 14:12:07.926479   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700
	I0528 14:12:07.926479   14040 round_trippers.go:469] Request Headers:
	I0528 14:12:07.926479   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:12:07.926479   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:12:07.931637   14040 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0528 14:12:07.932326   14040 pod_ready.go:92] pod "kube-apiserver-ha-941700" in "kube-system" namespace has status "Ready":"True"
	I0528 14:12:07.932326   14040 pod_ready.go:81] duration metric: took 390.3332ms for pod "kube-apiserver-ha-941700" in "kube-system" namespace to be "Ready" ...
	I0528 14:12:07.932326   14040 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-941700-m02" in "kube-system" namespace to be "Ready" ...
	I0528 14:12:08.129401   14040 request.go:629] Waited for 197.0745ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-941700-m02
	I0528 14:12:08.129626   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-941700-m02
	I0528 14:12:08.129626   14040 round_trippers.go:469] Request Headers:
	I0528 14:12:08.129718   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:12:08.129718   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:12:08.135126   14040 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0528 14:12:08.334541   14040 request.go:629] Waited for 198.3373ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.216.124:8443/api/v1/nodes/ha-941700-m02
	I0528 14:12:08.334890   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700-m02
	I0528 14:12:08.335006   14040 round_trippers.go:469] Request Headers:
	I0528 14:12:08.335006   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:12:08.335006   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:12:08.340224   14040 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0528 14:12:08.340984   14040 pod_ready.go:92] pod "kube-apiserver-ha-941700-m02" in "kube-system" namespace has status "Ready":"True"
	I0528 14:12:08.340984   14040 pod_ready.go:81] duration metric: took 408.657ms for pod "kube-apiserver-ha-941700-m02" in "kube-system" namespace to be "Ready" ...
	I0528 14:12:08.341581   14040 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-941700-m03" in "kube-system" namespace to be "Ready" ...
	I0528 14:12:08.539548   14040 request.go:629] Waited for 197.6541ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-941700-m03
	I0528 14:12:08.539740   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-941700-m03
	I0528 14:12:08.539740   14040 round_trippers.go:469] Request Headers:
	I0528 14:12:08.539740   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:12:08.539843   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:12:08.547350   14040 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0528 14:12:08.728678   14040 request.go:629] Waited for 179.0175ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.216.124:8443/api/v1/nodes/ha-941700-m03
	I0528 14:12:08.729032   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700-m03
	I0528 14:12:08.729032   14040 round_trippers.go:469] Request Headers:
	I0528 14:12:08.729032   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:12:08.729032   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:12:08.734213   14040 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 14:12:08.734891   14040 pod_ready.go:92] pod "kube-apiserver-ha-941700-m03" in "kube-system" namespace has status "Ready":"True"
	I0528 14:12:08.734949   14040 pod_ready.go:81] duration metric: took 393.3666ms for pod "kube-apiserver-ha-941700-m03" in "kube-system" namespace to be "Ready" ...
	I0528 14:12:08.734949   14040 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-941700" in "kube-system" namespace to be "Ready" ...
	I0528 14:12:08.933323   14040 request.go:629] Waited for 198.0368ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-941700
	I0528 14:12:08.933504   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-941700
	I0528 14:12:08.933504   14040 round_trippers.go:469] Request Headers:
	I0528 14:12:08.933504   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:12:08.933504   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:12:08.939531   14040 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0528 14:12:09.137030   14040 request.go:629] Waited for 195.9391ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.216.124:8443/api/v1/nodes/ha-941700
	I0528 14:12:09.137156   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700
	I0528 14:12:09.137156   14040 round_trippers.go:469] Request Headers:
	I0528 14:12:09.137184   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:12:09.137184   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:12:09.141811   14040 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 14:12:09.143290   14040 pod_ready.go:92] pod "kube-controller-manager-ha-941700" in "kube-system" namespace has status "Ready":"True"
	I0528 14:12:09.143290   14040 pod_ready.go:81] duration metric: took 408.3407ms for pod "kube-controller-manager-ha-941700" in "kube-system" namespace to be "Ready" ...
	I0528 14:12:09.143290   14040 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-941700-m02" in "kube-system" namespace to be "Ready" ...
	I0528 14:12:09.341149   14040 request.go:629] Waited for 197.6612ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-941700-m02
	I0528 14:12:09.341313   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-941700-m02
	I0528 14:12:09.341313   14040 round_trippers.go:469] Request Headers:
	I0528 14:12:09.341313   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:12:09.341313   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:12:09.347203   14040 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0528 14:12:09.528596   14040 request.go:629] Waited for 179.3817ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.216.124:8443/api/v1/nodes/ha-941700-m02
	I0528 14:12:09.528596   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700-m02
	I0528 14:12:09.528596   14040 round_trippers.go:469] Request Headers:
	I0528 14:12:09.528596   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:12:09.528596   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:12:09.534262   14040 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0528 14:12:09.535775   14040 pod_ready.go:92] pod "kube-controller-manager-ha-941700-m02" in "kube-system" namespace has status "Ready":"True"
	I0528 14:12:09.535775   14040 pod_ready.go:81] duration metric: took 392.4836ms for pod "kube-controller-manager-ha-941700-m02" in "kube-system" namespace to be "Ready" ...
	I0528 14:12:09.535775   14040 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-941700-m03" in "kube-system" namespace to be "Ready" ...
	I0528 14:12:09.734653   14040 request.go:629] Waited for 198.6867ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-941700-m03
	I0528 14:12:09.734653   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-941700-m03
	I0528 14:12:09.734898   14040 round_trippers.go:469] Request Headers:
	I0528 14:12:09.734898   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:12:09.734898   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:12:09.739941   14040 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0528 14:12:09.937627   14040 request.go:629] Waited for 195.7272ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.216.124:8443/api/v1/nodes/ha-941700-m03
	I0528 14:12:09.937842   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700-m03
	I0528 14:12:09.937934   14040 round_trippers.go:469] Request Headers:
	I0528 14:12:09.937934   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:12:09.937934   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:12:09.943548   14040 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 14:12:09.944100   14040 pod_ready.go:92] pod "kube-controller-manager-ha-941700-m03" in "kube-system" namespace has status "Ready":"True"
	I0528 14:12:09.944230   14040 pod_ready.go:81] duration metric: took 408.3244ms for pod "kube-controller-manager-ha-941700-m03" in "kube-system" namespace to be "Ready" ...
	I0528 14:12:09.944230   14040 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-dcrq6" in "kube-system" namespace to be "Ready" ...
	I0528 14:12:10.125831   14040 request.go:629] Waited for 181.5347ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dcrq6
	I0528 14:12:10.125831   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dcrq6
	I0528 14:12:10.125831   14040 round_trippers.go:469] Request Headers:
	I0528 14:12:10.125831   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:12:10.125831   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:12:10.133255   14040 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0528 14:12:10.331222   14040 request.go:629] Waited for 196.6116ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.216.124:8443/api/v1/nodes/ha-941700-m03
	I0528 14:12:10.331573   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700-m03
	I0528 14:12:10.331573   14040 round_trippers.go:469] Request Headers:
	I0528 14:12:10.331638   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:12:10.331638   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:12:10.341278   14040 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0528 14:12:10.341991   14040 pod_ready.go:92] pod "kube-proxy-dcrq6" in "kube-system" namespace has status "Ready":"True"
	I0528 14:12:10.342120   14040 pod_ready.go:81] duration metric: took 397.8887ms for pod "kube-proxy-dcrq6" in "kube-system" namespace to be "Ready" ...
	I0528 14:12:10.342120   14040 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hrwn5" in "kube-system" namespace to be "Ready" ...
	I0528 14:12:10.534582   14040 request.go:629] Waited for 192.3229ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hrwn5
	I0528 14:12:10.534888   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hrwn5
	I0528 14:12:10.534888   14040 round_trippers.go:469] Request Headers:
	I0528 14:12:10.534954   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:12:10.534954   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:12:10.540937   14040 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0528 14:12:10.736462   14040 request.go:629] Waited for 194.4538ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.216.124:8443/api/v1/nodes/ha-941700
	I0528 14:12:10.736724   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700
	I0528 14:12:10.736724   14040 round_trippers.go:469] Request Headers:
	I0528 14:12:10.736788   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:12:10.736788   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:12:10.742129   14040 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0528 14:12:10.744144   14040 pod_ready.go:92] pod "kube-proxy-hrwn5" in "kube-system" namespace has status "Ready":"True"
	I0528 14:12:10.744144   14040 pod_ready.go:81] duration metric: took 402.0235ms for pod "kube-proxy-hrwn5" in "kube-system" namespace to be "Ready" ...
	I0528 14:12:10.744251   14040 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-spzwh" in "kube-system" namespace to be "Ready" ...
	I0528 14:12:10.925832   14040 request.go:629] Waited for 181.3764ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods/kube-proxy-spzwh
	I0528 14:12:10.925980   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods/kube-proxy-spzwh
	I0528 14:12:10.925980   14040 round_trippers.go:469] Request Headers:
	I0528 14:12:10.925980   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:12:10.925980   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:12:10.931779   14040 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0528 14:12:11.128697   14040 request.go:629] Waited for 195.8042ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.216.124:8443/api/v1/nodes/ha-941700-m02
	I0528 14:12:11.128950   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700-m02
	I0528 14:12:11.129015   14040 round_trippers.go:469] Request Headers:
	I0528 14:12:11.129076   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:12:11.129076   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:12:11.134195   14040 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0528 14:12:11.135117   14040 pod_ready.go:92] pod "kube-proxy-spzwh" in "kube-system" namespace has status "Ready":"True"
	I0528 14:12:11.135117   14040 pod_ready.go:81] duration metric: took 390.8657ms for pod "kube-proxy-spzwh" in "kube-system" namespace to be "Ready" ...
	I0528 14:12:11.135117   14040 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-941700" in "kube-system" namespace to be "Ready" ...
	I0528 14:12:11.331020   14040 request.go:629] Waited for 195.5592ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-941700
	I0528 14:12:11.331129   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-941700
	I0528 14:12:11.331129   14040 round_trippers.go:469] Request Headers:
	I0528 14:12:11.331129   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:12:11.331129   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:12:11.337498   14040 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0528 14:12:11.535769   14040 request.go:629] Waited for 197.5111ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.216.124:8443/api/v1/nodes/ha-941700
	I0528 14:12:11.535769   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700
	I0528 14:12:11.535769   14040 round_trippers.go:469] Request Headers:
	I0528 14:12:11.535769   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:12:11.535769   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:12:11.542028   14040 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0528 14:12:11.543285   14040 pod_ready.go:92] pod "kube-scheduler-ha-941700" in "kube-system" namespace has status "Ready":"True"
	I0528 14:12:11.543285   14040 pod_ready.go:81] duration metric: took 408.1672ms for pod "kube-scheduler-ha-941700" in "kube-system" namespace to be "Ready" ...
	I0528 14:12:11.543285   14040 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-941700-m02" in "kube-system" namespace to be "Ready" ...
	I0528 14:12:11.737741   14040 request.go:629] Waited for 194.2819ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-941700-m02
	I0528 14:12:11.737955   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-941700-m02
	I0528 14:12:11.737955   14040 round_trippers.go:469] Request Headers:
	I0528 14:12:11.737955   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:12:11.738048   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:12:11.742817   14040 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 14:12:11.940686   14040 request.go:629] Waited for 195.9707ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.216.124:8443/api/v1/nodes/ha-941700-m02
	I0528 14:12:11.940686   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700-m02
	I0528 14:12:11.940686   14040 round_trippers.go:469] Request Headers:
	I0528 14:12:11.940686   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:12:11.940686   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:12:11.946254   14040 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0528 14:12:11.947406   14040 pod_ready.go:92] pod "kube-scheduler-ha-941700-m02" in "kube-system" namespace has status "Ready":"True"
	I0528 14:12:11.947472   14040 pod_ready.go:81] duration metric: took 404.1862ms for pod "kube-scheduler-ha-941700-m02" in "kube-system" namespace to be "Ready" ...
	I0528 14:12:11.947472   14040 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-941700-m03" in "kube-system" namespace to be "Ready" ...
	I0528 14:12:12.126585   14040 request.go:629] Waited for 178.8135ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-941700-m03
	I0528 14:12:12.126683   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-941700-m03
	I0528 14:12:12.126683   14040 round_trippers.go:469] Request Headers:
	I0528 14:12:12.126683   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:12:12.126769   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:12:12.131238   14040 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 14:12:12.330331   14040 request.go:629] Waited for 197.3557ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.216.124:8443/api/v1/nodes/ha-941700-m03
	I0528 14:12:12.330557   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes/ha-941700-m03
	I0528 14:12:12.330557   14040 round_trippers.go:469] Request Headers:
	I0528 14:12:12.330557   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:12:12.330557   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:12:12.336050   14040 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0528 14:12:12.337061   14040 pod_ready.go:92] pod "kube-scheduler-ha-941700-m03" in "kube-system" namespace has status "Ready":"True"
	I0528 14:12:12.337120   14040 pod_ready.go:81] duration metric: took 389.6467ms for pod "kube-scheduler-ha-941700-m03" in "kube-system" namespace to be "Ready" ...
	I0528 14:12:12.337186   14040 pod_ready.go:38] duration metric: took 5.1515363s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0528 14:12:12.337271   14040 api_server.go:52] waiting for apiserver process to appear ...
	I0528 14:12:12.348972   14040 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 14:12:12.381471   14040 api_server.go:72] duration metric: took 24.8490637s to wait for apiserver process to appear ...
	I0528 14:12:12.381538   14040 api_server.go:88] waiting for apiserver healthz status ...
	I0528 14:12:12.381538   14040 api_server.go:253] Checking apiserver healthz at https://172.18.216.124:8443/healthz ...
	I0528 14:12:12.392167   14040 api_server.go:279] https://172.18.216.124:8443/healthz returned 200:
	ok
	I0528 14:12:12.392910   14040 round_trippers.go:463] GET https://172.18.216.124:8443/version
	I0528 14:12:12.392910   14040 round_trippers.go:469] Request Headers:
	I0528 14:12:12.393025   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:12:12.393025   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:12:12.394363   14040 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0528 14:12:12.395346   14040 api_server.go:141] control plane version: v1.30.1
	I0528 14:12:12.395346   14040 api_server.go:131] duration metric: took 13.8084ms to wait for apiserver health ...
	I0528 14:12:12.395346   14040 system_pods.go:43] waiting for kube-system pods to appear ...
	I0528 14:12:12.531971   14040 request.go:629] Waited for 136.514ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods
	I0528 14:12:12.532188   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods
	I0528 14:12:12.532188   14040 round_trippers.go:469] Request Headers:
	I0528 14:12:12.532301   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:12:12.532301   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:12:12.546358   14040 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0528 14:12:12.562536   14040 system_pods.go:59] 24 kube-system pods found
	I0528 14:12:12.562536   14040 system_pods.go:61] "coredns-7db6d8ff4d-8prdw" [4736eb43-2c5f-4edc-8070-bd2f4cb03c25] Running
	I0528 14:12:12.562536   14040 system_pods.go:61] "coredns-7db6d8ff4d-kmzqb" [347c74c7-9b8d-4b7b-976e-4723c0a1c7f6] Running
	I0528 14:12:12.562536   14040 system_pods.go:61] "etcd-ha-941700" [4fa0a90e-5c48-4186-858f-b1dfd9eaa2e8] Running
	I0528 14:12:12.562536   14040 system_pods.go:61] "etcd-ha-941700-m02" [69dfe795-b2cd-49c9-bb3c-47e6d46c2411] Running
	I0528 14:12:12.562536   14040 system_pods.go:61] "etcd-ha-941700-m03" [b4e953b9-a22c-4c90-a52d-243d7cfd83f9] Running
	I0528 14:12:12.562703   14040 system_pods.go:61] "kindnet-6jgcj" [7f58e46d-d3dd-4975-b817-78111c22997c] Running
	I0528 14:12:12.562703   14040 system_pods.go:61] "kindnet-whc47" [82a221e0-726a-4f37-8f25-1a3f674f0dd0] Running
	I0528 14:12:12.562703   14040 system_pods.go:61] "kindnet-xrp24" [8f9e51b9-76c6-4a4b-a13e-9dce573292e4] Running
	I0528 14:12:12.562703   14040 system_pods.go:61] "kube-apiserver-ha-941700" [39e10d5a-cc1a-44bd-9737-49657f744c72] Running
	I0528 14:12:12.562703   14040 system_pods.go:61] "kube-apiserver-ha-941700-m02" [1a121e31-902b-4f9a-8ab7-b0318bb3b710] Running
	I0528 14:12:12.562703   14040 system_pods.go:61] "kube-apiserver-ha-941700-m03" [7fba6735-c3de-4f9e-807e-2f47a6b9f4a6] Running
	I0528 14:12:12.562703   14040 system_pods.go:61] "kube-controller-manager-ha-941700" [7bd528f3-312d-4e0c-8d7d-ad91436d1fe1] Running
	I0528 14:12:12.562703   14040 system_pods.go:61] "kube-controller-manager-ha-941700-m02" [8d22254a-beac-4bcf-bee3-7bc9b316fee5] Running
	I0528 14:12:12.562703   14040 system_pods.go:61] "kube-controller-manager-ha-941700-m03" [33718de3-4958-448b-be70-b6a049144c09] Running
	I0528 14:12:12.562703   14040 system_pods.go:61] "kube-proxy-dcrq6" [a4393658-9e64-4987-9075-00b9be537c3d] Running
	I0528 14:12:12.562703   14040 system_pods.go:61] "kube-proxy-hrwn5" [b8766939-5796-4541-b98e-7ebd6f67cd88] Running
	I0528 14:12:12.562848   14040 system_pods.go:61] "kube-proxy-spzwh" [00625372-7eda-456b-8444-2c85da29285c] Running
	I0528 14:12:12.562848   14040 system_pods.go:61] "kube-scheduler-ha-941700" [7069782c-919c-4bf2-9c1b-2b5c0c88bbd6] Running
	I0528 14:12:12.562848   14040 system_pods.go:61] "kube-scheduler-ha-941700-m02" [a4bdc26c-1a2a-4ffe-ba5f-8214c9fa7de4] Running
	I0528 14:12:12.562921   14040 system_pods.go:61] "kube-scheduler-ha-941700-m03" [2f21f70c-3b6b-4d06-a663-219b78219ab9] Running
	I0528 14:12:12.562945   14040 system_pods.go:61] "kube-vip-ha-941700" [f3ae818c-50ad-43ac-9b57-e335c92bf829] Running
	I0528 14:12:12.562945   14040 system_pods.go:61] "kube-vip-ha-941700-m02" [3881b05b-4814-4dbc-a948-6de440748ba6] Running
	I0528 14:12:12.562970   14040 system_pods.go:61] "kube-vip-ha-941700-m03" [9e49b9d0-1b80-483f-8f7f-543775fdab48] Running
	I0528 14:12:12.562970   14040 system_pods.go:61] "storage-provisioner" [b958202b-ad05-4846-b47c-92c22aa98e03] Running
	I0528 14:12:12.562970   14040 system_pods.go:74] duration metric: took 167.6232ms to wait for pod list to return data ...
	I0528 14:12:12.562970   14040 default_sa.go:34] waiting for default service account to be created ...
	I0528 14:12:12.734527   14040 request.go:629] Waited for 171.3495ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.216.124:8443/api/v1/namespaces/default/serviceaccounts
	I0528 14:12:12.734786   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/namespaces/default/serviceaccounts
	I0528 14:12:12.734862   14040 round_trippers.go:469] Request Headers:
	I0528 14:12:12.734862   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:12:12.734918   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:12:12.740542   14040 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0528 14:12:12.741251   14040 default_sa.go:45] found service account: "default"
	I0528 14:12:12.741251   14040 default_sa.go:55] duration metric: took 178.2806ms for default service account to be created ...
	I0528 14:12:12.741251   14040 system_pods.go:116] waiting for k8s-apps to be running ...
	I0528 14:12:12.936575   14040 request.go:629] Waited for 195.0904ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods
	I0528 14:12:12.936759   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/namespaces/kube-system/pods
	I0528 14:12:12.936759   14040 round_trippers.go:469] Request Headers:
	I0528 14:12:12.936759   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:12:12.936759   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:12:12.946856   14040 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0528 14:12:12.958404   14040 system_pods.go:86] 24 kube-system pods found
	I0528 14:12:12.958404   14040 system_pods.go:89] "coredns-7db6d8ff4d-8prdw" [4736eb43-2c5f-4edc-8070-bd2f4cb03c25] Running
	I0528 14:12:12.958404   14040 system_pods.go:89] "coredns-7db6d8ff4d-kmzqb" [347c74c7-9b8d-4b7b-976e-4723c0a1c7f6] Running
	I0528 14:12:12.958404   14040 system_pods.go:89] "etcd-ha-941700" [4fa0a90e-5c48-4186-858f-b1dfd9eaa2e8] Running
	I0528 14:12:12.958404   14040 system_pods.go:89] "etcd-ha-941700-m02" [69dfe795-b2cd-49c9-bb3c-47e6d46c2411] Running
	I0528 14:12:12.958404   14040 system_pods.go:89] "etcd-ha-941700-m03" [b4e953b9-a22c-4c90-a52d-243d7cfd83f9] Running
	I0528 14:12:12.958404   14040 system_pods.go:89] "kindnet-6jgcj" [7f58e46d-d3dd-4975-b817-78111c22997c] Running
	I0528 14:12:12.958404   14040 system_pods.go:89] "kindnet-whc47" [82a221e0-726a-4f37-8f25-1a3f674f0dd0] Running
	I0528 14:12:12.958404   14040 system_pods.go:89] "kindnet-xrp24" [8f9e51b9-76c6-4a4b-a13e-9dce573292e4] Running
	I0528 14:12:12.958404   14040 system_pods.go:89] "kube-apiserver-ha-941700" [39e10d5a-cc1a-44bd-9737-49657f744c72] Running
	I0528 14:12:12.958404   14040 system_pods.go:89] "kube-apiserver-ha-941700-m02" [1a121e31-902b-4f9a-8ab7-b0318bb3b710] Running
	I0528 14:12:12.958404   14040 system_pods.go:89] "kube-apiserver-ha-941700-m03" [7fba6735-c3de-4f9e-807e-2f47a6b9f4a6] Running
	I0528 14:12:12.958404   14040 system_pods.go:89] "kube-controller-manager-ha-941700" [7bd528f3-312d-4e0c-8d7d-ad91436d1fe1] Running
	I0528 14:12:12.958404   14040 system_pods.go:89] "kube-controller-manager-ha-941700-m02" [8d22254a-beac-4bcf-bee3-7bc9b316fee5] Running
	I0528 14:12:12.958404   14040 system_pods.go:89] "kube-controller-manager-ha-941700-m03" [33718de3-4958-448b-be70-b6a049144c09] Running
	I0528 14:12:12.958404   14040 system_pods.go:89] "kube-proxy-dcrq6" [a4393658-9e64-4987-9075-00b9be537c3d] Running
	I0528 14:12:12.958404   14040 system_pods.go:89] "kube-proxy-hrwn5" [b8766939-5796-4541-b98e-7ebd6f67cd88] Running
	I0528 14:12:12.958404   14040 system_pods.go:89] "kube-proxy-spzwh" [00625372-7eda-456b-8444-2c85da29285c] Running
	I0528 14:12:12.958404   14040 system_pods.go:89] "kube-scheduler-ha-941700" [7069782c-919c-4bf2-9c1b-2b5c0c88bbd6] Running
	I0528 14:12:12.958404   14040 system_pods.go:89] "kube-scheduler-ha-941700-m02" [a4bdc26c-1a2a-4ffe-ba5f-8214c9fa7de4] Running
	I0528 14:12:12.958404   14040 system_pods.go:89] "kube-scheduler-ha-941700-m03" [2f21f70c-3b6b-4d06-a663-219b78219ab9] Running
	I0528 14:12:12.958404   14040 system_pods.go:89] "kube-vip-ha-941700" [f3ae818c-50ad-43ac-9b57-e335c92bf829] Running
	I0528 14:12:12.958404   14040 system_pods.go:89] "kube-vip-ha-941700-m02" [3881b05b-4814-4dbc-a948-6de440748ba6] Running
	I0528 14:12:12.958404   14040 system_pods.go:89] "kube-vip-ha-941700-m03" [9e49b9d0-1b80-483f-8f7f-543775fdab48] Running
	I0528 14:12:12.958404   14040 system_pods.go:89] "storage-provisioner" [b958202b-ad05-4846-b47c-92c22aa98e03] Running
	I0528 14:12:12.958404   14040 system_pods.go:126] duration metric: took 217.1529ms to wait for k8s-apps to be running ...
	I0528 14:12:12.958928   14040 system_svc.go:44] waiting for kubelet service to be running ....
	I0528 14:12:12.970752   14040 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0528 14:12:13.001254   14040 system_svc.go:56] duration metric: took 42.8495ms WaitForService to wait for kubelet
	I0528 14:12:13.001325   14040 kubeadm.go:576] duration metric: took 25.4689166s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0528 14:12:13.001325   14040 node_conditions.go:102] verifying NodePressure condition ...
	I0528 14:12:13.138507   14040 request.go:629] Waited for 137.0428ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.216.124:8443/api/v1/nodes
	I0528 14:12:13.138507   14040 round_trippers.go:463] GET https://172.18.216.124:8443/api/v1/nodes
	I0528 14:12:13.138507   14040 round_trippers.go:469] Request Headers:
	I0528 14:12:13.138507   14040 round_trippers.go:473]     Accept: application/json, */*
	I0528 14:12:13.138729   14040 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 14:12:13.146717   14040 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0528 14:12:13.149173   14040 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0528 14:12:13.149173   14040 node_conditions.go:123] node cpu capacity is 2
	I0528 14:12:13.149173   14040 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0528 14:12:13.149173   14040 node_conditions.go:123] node cpu capacity is 2
	I0528 14:12:13.149173   14040 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0528 14:12:13.149173   14040 node_conditions.go:123] node cpu capacity is 2
	I0528 14:12:13.149173   14040 node_conditions.go:105] duration metric: took 147.8473ms to run NodePressure ...
	I0528 14:12:13.149317   14040 start.go:240] waiting for startup goroutines ...
	I0528 14:12:13.149349   14040 start.go:254] writing updated cluster config ...
	I0528 14:12:13.163447   14040 ssh_runner.go:195] Run: rm -f paused
	I0528 14:12:13.311138   14040 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0528 14:12:13.316040   14040 out.go:177] * Done! kubectl is now configured to use "ha-941700" cluster and "default" namespace by default
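The pod_ready.go and api_server.go entries above record the pattern minikube follows before declaring the cluster ready: poll each kube-system pod for its Ready condition, then probe the apiserver's /healthz endpoint. Below is a minimal client-go sketch of that polling pattern, for illustration only (it is not minikube's implementation); the kubeconfig path is the client-go default and the pod name is taken from the log above.

// Illustrative sketch: wait for a pod's Ready condition, then probe /healthz,
// mirroring the pod_ready.go / api_server.go steps recorded in the log above.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the named pod currently has the Ready condition set to True.
func podReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	// Assumption: the default kubeconfig (~/.kube/config) points at the cluster.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// The log waits up to 6m0s per pod; use the same budget here.
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()

	for {
		ok, err := podReady(ctx, cs, "kube-system", "kube-apiserver-ha-941700")
		if err == nil && ok {
			fmt.Println("pod is Ready")
			break
		}
		select {
		case <-ctx.Done():
			panic("timed out waiting for pod to become Ready")
		case <-time.After(2 * time.Second):
		}
	}

	// Equivalent of the api_server.go healthz check: GET /healthz should return "ok".
	body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(ctx)
	if err != nil {
		panic(err)
	}
	fmt.Printf("healthz: %s\n", body)
}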
	
	
	==> Docker <==
	May 28 21:04:14 ha-941700 dockerd[1330]: time="2024-05-28T21:04:14.045529301Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 28 21:04:14 ha-941700 dockerd[1330]: time="2024-05-28T21:04:14.223303766Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 28 21:04:14 ha-941700 dockerd[1330]: time="2024-05-28T21:04:14.224053094Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 28 21:04:14 ha-941700 dockerd[1330]: time="2024-05-28T21:04:14.224321405Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 28 21:04:14 ha-941700 dockerd[1330]: time="2024-05-28T21:04:14.225773459Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 28 21:04:14 ha-941700 dockerd[1330]: time="2024-05-28T21:04:14.241011330Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 28 21:04:14 ha-941700 dockerd[1330]: time="2024-05-28T21:04:14.241482948Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 28 21:04:14 ha-941700 dockerd[1330]: time="2024-05-28T21:04:14.241670855Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 28 21:04:14 ha-941700 dockerd[1330]: time="2024-05-28T21:04:14.242036869Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 28 21:12:07 ha-941700 dockerd[1330]: time="2024-05-28T21:12:07.209486199Z" level=info msg="shim disconnected" id=1875bf92b81108846fdcdf445dfafdb4c9db47d7ecffe9d402904776fc860a1e namespace=moby
	May 28 21:12:07 ha-941700 dockerd[1330]: time="2024-05-28T21:12:07.209586001Z" level=warning msg="cleaning up after shim disconnected" id=1875bf92b81108846fdcdf445dfafdb4c9db47d7ecffe9d402904776fc860a1e namespace=moby
	May 28 21:12:07 ha-941700 dockerd[1330]: time="2024-05-28T21:12:07.209601801Z" level=info msg="cleaning up dead shim" namespace=moby
	May 28 21:12:07 ha-941700 dockerd[1324]: time="2024-05-28T21:12:07.210634620Z" level=info msg="ignoring event" container=1875bf92b81108846fdcdf445dfafdb4c9db47d7ecffe9d402904776fc860a1e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 28 21:12:07 ha-941700 dockerd[1330]: time="2024-05-28T21:12:07.547789729Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 28 21:12:07 ha-941700 dockerd[1330]: time="2024-05-28T21:12:07.548141636Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 28 21:12:07 ha-941700 dockerd[1330]: time="2024-05-28T21:12:07.548170536Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 28 21:12:07 ha-941700 dockerd[1330]: time="2024-05-28T21:12:07.548355540Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 28 21:14:01 ha-941700 dockerd[1324]: 2024/05/28 21:14:01 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 28 21:14:01 ha-941700 dockerd[1324]: 2024/05/28 21:14:01 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 28 21:14:01 ha-941700 dockerd[1324]: 2024/05/28 21:14:01 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 28 21:14:02 ha-941700 dockerd[1324]: 2024/05/28 21:14:02 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 28 21:14:02 ha-941700 dockerd[1324]: 2024/05/28 21:14:02 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 28 21:14:02 ha-941700 dockerd[1324]: 2024/05/28 21:14:02 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 28 21:14:02 ha-941700 dockerd[1324]: 2024/05/28 21:14:02 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 28 21:14:02 ha-941700 dockerd[1324]: 2024/05/28 21:14:02 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	81c63fae6c15c       38af8ddebf499                                                                                       18 minutes ago      Running             kube-vip                  1                   9796e3164ede7       kube-vip-ha-941700
	4c5c453eb0b63       cbb01a7bd410d                                                                                       26 minutes ago      Running             coredns                   0                   bf302748dab42       coredns-7db6d8ff4d-8prdw
	980d85de8140e       cbb01a7bd410d                                                                                       26 minutes ago      Running             coredns                   0                   7fce36137fb0f       coredns-7db6d8ff4d-kmzqb
	eb75bacafe16d       6e38f40d628db                                                                                       26 minutes ago      Running             storage-provisioner       0                   f5c08e7fa1a99       storage-provisioner
	e136369b3a3f9       kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8            26 minutes ago      Running             kindnet-cni               0                   7ab827b080443       kindnet-xrp24
	2dc916ecd7fb9       747097150317f                                                                                       26 minutes ago      Running             kube-proxy                0                   547af2f01c379       kube-proxy-hrwn5
	1875bf92b8110       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f   27 minutes ago      Exited              kube-vip                  0                   9796e3164ede7       kube-vip-ha-941700
	82481386243ca       a52dc94f0a912                                                                                       27 minutes ago      Running             kube-scheduler            0                   2035d37daac72       kube-scheduler-ha-941700
	c9264f5e7db1c       91be940803172                                                                                       27 minutes ago      Running             kube-apiserver            0                   8fa7440bbfb1e       kube-apiserver-ha-941700
	2d44d675f6ae3       25a1387cdab82                                                                                       27 minutes ago      Running             kube-controller-manager   0                   5e4b184ad50ca       kube-controller-manager-ha-941700
	0da681cf1666d       3861cfcd7c04c                                                                                       27 minutes ago      Running             etcd                      0                   d3e8f4f13a6d2       etcd-ha-941700
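The coredns sections below record the in-cluster lookups issued while the test ran: A/AAAA queries for kubernetes.default.svc.cluster.local and host.minikube.internal, plus PTR queries for the 10.96.0.x service addresses. A short Go sketch of such a lookup, for illustration only; it assumes it runs inside a pod, where /etc/resolv.conf points at the cluster DNS service so the query reaches CoreDNS.

// Illustrative sketch: resolve the in-cluster apiserver service name the same
// way the clients whose queries appear in the coredns logs below do.
package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Inside a pod this goes through the cluster DNS (CoreDNS); from the host it
	// will simply fail, since cluster.local names are not resolvable there.
	ips, err := net.DefaultResolver.LookupIPAddr(ctx, "kubernetes.default.svc.cluster.local")
	if err != nil {
		panic(err)
	}
	for _, ip := range ips {
		fmt.Println(ip.String())
	}
}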
	
	
	==> coredns [4c5c453eb0b6] <==
	[INFO] 10.244.1.2:49307 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.002915721s
	[INFO] 10.244.1.2:55702 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001141s
	[INFO] 10.244.2.3:47815 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000120801s
	[INFO] 10.244.2.3:52357 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000174801s
	[INFO] 10.244.2.3:60150 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000257702s
	[INFO] 10.244.2.3:38762 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000557s
	[INFO] 10.244.2.3:46638 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000104101s
	[INFO] 10.244.2.2:35468 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000175201s
	[INFO] 10.244.2.2:52084 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000202501s
	[INFO] 10.244.2.2:46549 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000086801s
	[INFO] 10.244.1.2:39482 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000183202s
	[INFO] 10.244.1.2:48208 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001022s
	[INFO] 10.244.2.3:58380 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000125801s
	[INFO] 10.244.2.3:47791 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000617s
	[INFO] 10.244.2.3:58446 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000075801s
	[INFO] 10.244.2.2:51786 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000205401s
	[INFO] 10.244.2.2:46402 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0000668s
	[INFO] 10.244.2.2:46162 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000527s
	[INFO] 10.244.1.2:47633 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000222302s
	[INFO] 10.244.2.3:41106 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000190501s
	[INFO] 10.244.2.3:49018 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000401002s
	[INFO] 10.244.2.3:35462 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0000984s
	[INFO] 10.244.2.3:33923 - 5 "PTR IN 1.208.18.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000205101s
	[INFO] 10.244.2.2:53789 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000156201s
	[INFO] 10.244.2.2:38081 - 5 "PTR IN 1.208.18.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0000623s
	
	
	==> coredns [980d85de8140] <==
	[INFO] 127.0.0.1:55048 - 14113 "HINFO IN 4658628224283164909.4753887424590601170. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.080042402s
	[INFO] 10.244.1.2:45997 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.040510412s
	[INFO] 10.244.2.3:36534 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.000073101s
	[INFO] 10.244.2.2:53757 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.000076601s
	[INFO] 10.244.1.2:36086 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000250802s
	[INFO] 10.244.1.2:41780 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000197201s
	[INFO] 10.244.1.2:33788 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000129501s
	[INFO] 10.244.1.2:48691 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000159901s
	[INFO] 10.244.2.3:35600 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.002879521s
	[INFO] 10.244.2.3:38955 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000275302s
	[INFO] 10.244.2.3:56484 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0000708s
	[INFO] 10.244.2.2:53277 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000156101s
	[INFO] 10.244.2.2:56556 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000275302s
	[INFO] 10.244.2.2:48986 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.014579306s
	[INFO] 10.244.2.2:43791 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000072501s
	[INFO] 10.244.2.2:59645 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000157801s
	[INFO] 10.244.1.2:58983 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000139201s
	[INFO] 10.244.1.2:50857 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000151501s
	[INFO] 10.244.2.3:46176 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000124501s
	[INFO] 10.244.2.2:46419 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000079101s
	[INFO] 10.244.1.2:46653 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000267202s
	[INFO] 10.244.1.2:35565 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000127701s
	[INFO] 10.244.1.2:54940 - 5 "PTR IN 1.208.18.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000131901s
	[INFO] 10.244.2.2:39234 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000235201s
	[INFO] 10.244.2.2:56420 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000263301s
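The node_conditions.go entries earlier in the log and the describe-nodes output below expose the same per-node data: capacity (cpu, ephemeral-storage) and the MemoryPressure / DiskPressure / PIDPressure / Ready conditions. A client-go sketch that reads those fields, for illustration only and assuming the default kubeconfig location:

// Illustrative sketch: list nodes and print the capacity and conditions that
// the NodePressure check and the "describe nodes" section report.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
		for _, c := range n.Status.Conditions {
			// e.g. MemoryPressure=False (KubeletHasSufficientMemory), Ready=True (KubeletReady)
			fmt.Printf("  %s=%s (%s)\n", c.Type, c.Status, c.Reason)
		}
	}
}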
	
	
	==> describe nodes <==
	Name:               ha-941700
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-941700
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c95b4fdda455689199e2a93674568b261e34dc82
	                    minikube.k8s.io/name=ha-941700
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_28T14_03_49_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 28 May 2024 21:03:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-941700
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 28 May 2024 21:30:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 28 May 2024 21:28:19 +0000   Tue, 28 May 2024 21:13:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 28 May 2024 21:28:19 +0000   Tue, 28 May 2024 21:13:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 28 May 2024 21:28:19 +0000   Tue, 28 May 2024 21:13:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 28 May 2024 21:28:19 +0000   Tue, 28 May 2024 21:13:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.18.216.124
	  Hostname:    ha-941700
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 77f45878e3b64766bcabad456d3d63aa
	  System UUID:                5e2bb58e-529b-744d-ab43-67cbd66cc2ae
	  Boot ID:                    b7a84e0f-a48d-4650-8123-ce9427b898d6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace    Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------    ----                                 ------------  ----------  ---------------  -------------  ---
	  kube-system  coredns-7db6d8ff4d-8prdw             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     26m
	  kube-system  coredns-7db6d8ff4d-kmzqb             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     26m
	  kube-system  etcd-ha-941700                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         27m
	  kube-system  kindnet-xrp24                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      26m
	  kube-system  kube-apiserver-ha-941700             250m (12%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system  kube-controller-manager-ha-941700    200m (10%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system  kube-proxy-hrwn5                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system  kube-scheduler-ha-941700             100m (5%)     0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system  kube-vip-ha-941700                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system  storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 26m                kube-proxy       
	  Normal  Starting                 27m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  27m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           26m                node-controller  Node ha-941700 event: Registered Node ha-941700 in Controller
	  Normal  RegisteredNode           22m                node-controller  Node ha-941700 event: Registered Node ha-941700 in Controller
	  Normal  RegisteredNode           18m                node-controller  Node ha-941700 event: Registered Node ha-941700 in Controller
	  Normal  NodeNotReady             18m                node-controller  Node ha-941700 status is now: NodeNotReady
	  Normal  NodeHasSufficientMemory  17m (x2 over 27m)  kubelet          Node ha-941700 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    17m (x2 over 27m)  kubelet          Node ha-941700 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     17m (x2 over 27m)  kubelet          Node ha-941700 status is now: NodeHasSufficientPID
	  Normal  NodeReady                17m (x2 over 26m)  kubelet          Node ha-941700 status is now: NodeReady
	
	
	Name:               ha-941700-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-941700-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c95b4fdda455689199e2a93674568b261e34dc82
	                    minikube.k8s.io/name=ha-941700
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_28T14_07_50_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 28 May 2024 21:07:44 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-941700-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 28 May 2024 21:30:00 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Tue, 28 May 2024 21:28:40 +0000   Tue, 28 May 2024 21:30:43 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Tue, 28 May 2024 21:28:40 +0000   Tue, 28 May 2024 21:30:43 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Tue, 28 May 2024 21:28:40 +0000   Tue, 28 May 2024 21:30:43 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Tue, 28 May 2024 21:28:40 +0000   Tue, 28 May 2024 21:30:43 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  172.18.211.44
	  Hostname:    ha-941700-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 6c30279926374dd3bd8f665d217f3bc0
	  System UUID:                2ad5eb6b-910f-c94b-929c-5d11f4308df3
	  Boot ID:                    c87e1b84-e6c5-4f4f-b7a2-9d6bcfe12067
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-qnh86                  0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         18m
	  kube-system                 etcd-ha-941700-m02                       100m (5%!)(MISSING)     0 (0%!)(MISSING)      100Mi (4%!)(MISSING)       0 (0%!)(MISSING)         23m
	  kube-system                 kindnet-whc47                            100m (5%!)(MISSING)     100m (5%!)(MISSING)   50Mi (2%!)(MISSING)        50Mi (2%!)(MISSING)      23m
	  kube-system                 kube-apiserver-ha-941700-m02             250m (12%!)(MISSING)    0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         23m
	  kube-system                 kube-controller-manager-ha-941700-m02    200m (10%!)(MISSING)    0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         23m
	  kube-system                 kube-proxy-spzwh                         0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         23m
	  kube-system                 kube-scheduler-ha-941700-m02             100m (5%!)(MISSING)     0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         23m
	  kube-system                 kube-vip-ha-941700-m02                   0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         23m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 23m                kube-proxy       
	  Normal  Starting                 23m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  23m (x2 over 23m)  kubelet          Node ha-941700-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23m (x2 over 23m)  kubelet          Node ha-941700-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23m (x2 over 23m)  kubelet          Node ha-941700-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           23m                node-controller  Node ha-941700-m02 event: Registered Node ha-941700-m02 in Controller
	  Normal  NodeReady                23m                kubelet          Node ha-941700-m02 status is now: NodeReady
	  Normal  RegisteredNode           22m                node-controller  Node ha-941700-m02 event: Registered Node ha-941700-m02 in Controller
	  Normal  RegisteredNode           18m                node-controller  Node ha-941700-m02 event: Registered Node ha-941700-m02 in Controller
	  Normal  NodeNotReady             18s                node-controller  Node ha-941700-m02 status is now: NodeNotReady
	
	
	Name:               ha-941700-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-941700-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c95b4fdda455689199e2a93674568b261e34dc82
	                    minikube.k8s.io/name=ha-941700
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_28T14_11_47_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 28 May 2024 21:11:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-941700-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 28 May 2024 21:31:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 28 May 2024 21:28:36 +0000   Tue, 28 May 2024 21:11:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 28 May 2024 21:28:36 +0000   Tue, 28 May 2024 21:11:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 28 May 2024 21:28:36 +0000   Tue, 28 May 2024 21:11:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 28 May 2024 21:28:36 +0000   Tue, 28 May 2024 21:11:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.18.222.186
	  Hostname:    ha-941700-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 2485f1b4ca7a4f19afcda00949d8cd5c
	  System UUID:                5a5be4cd-23e0-cb46-9f05-2d08965906d1
	  Boot ID:                    1f46f7a2-aeb7-4b43-a116-0ff9dc1ac984
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-s2cwz                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  default                     busybox-fc5497c4f-xbkdf                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 etcd-ha-941700-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         19m
	  kube-system                 kindnet-6jgcj                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      19m
	  kube-system                 kube-apiserver-ha-941700-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-controller-manager-ha-941700-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-proxy-dcrq6                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-scheduler-ha-941700-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-vip-ha-941700-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 19m                kube-proxy       
	  Normal  NodeHasSufficientMemory  19m (x8 over 19m)  kubelet          Node ha-941700-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m (x8 over 19m)  kubelet          Node ha-941700-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m (x7 over 19m)  kubelet          Node ha-941700-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           19m                node-controller  Node ha-941700-m03 event: Registered Node ha-941700-m03 in Controller
	  Normal  RegisteredNode           19m                node-controller  Node ha-941700-m03 event: Registered Node ha-941700-m03 in Controller
	  Normal  RegisteredNode           18m                node-controller  Node ha-941700-m03 event: Registered Node ha-941700-m03 in Controller
	
	
	Name:               ha-941700-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-941700-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c95b4fdda455689199e2a93674568b261e34dc82
	                    minikube.k8s.io/name=ha-941700
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_28T14_17_22_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 28 May 2024 21:17:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-941700-m04
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 28 May 2024 21:30:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 28 May 2024 21:28:06 +0000   Tue, 28 May 2024 21:17:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 28 May 2024 21:28:06 +0000   Tue, 28 May 2024 21:17:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 28 May 2024 21:28:06 +0000   Tue, 28 May 2024 21:17:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 28 May 2024 21:28:06 +0000   Tue, 28 May 2024 21:17:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.18.217.46
	  Hostname:    ha-941700-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 38d115e8f15f4624bfdcac7cd331dcf7
	  System UUID:                661e86a2-4799-784e-9046-761af652bb4d
	  Boot ID:                    6fa551b9-1ccb-4bb0-ade0-95e31dfabdfd
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-4wsvw       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-proxy-8krzg    0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13m                kube-proxy       
	  Normal  RegisteredNode           13m                node-controller  Node ha-941700-m04 event: Registered Node ha-941700-m04 in Controller
	  Normal  NodeHasSufficientMemory  13m (x2 over 13m)  kubelet          Node ha-941700-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x2 over 13m)  kubelet          Node ha-941700-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x2 over 13m)  kubelet          Node ha-941700-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node ha-941700-m04 event: Registered Node ha-941700-m04 in Controller
	  Normal  RegisteredNode           13m                node-controller  Node ha-941700-m04 event: Registered Node ha-941700-m04 in Controller
	  Normal  NodeReady                13m                kubelet          Node ha-941700-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[  +6.685793] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[May28 21:02] systemd-fstab-generator[642]: Ignoring "noauto" option for root device
	[  +0.179611] systemd-fstab-generator[654]: Ignoring "noauto" option for root device
	[May28 21:03] systemd-fstab-generator[947]: Ignoring "noauto" option for root device
	[  +0.109259] kauditd_printk_skb: 59 callbacks suppressed
	[  +0.552972] systemd-fstab-generator[986]: Ignoring "noauto" option for root device
	[  +0.187720] systemd-fstab-generator[998]: Ignoring "noauto" option for root device
	[  +0.241288] systemd-fstab-generator[1012]: Ignoring "noauto" option for root device
	[  +2.828087] systemd-fstab-generator[1184]: Ignoring "noauto" option for root device
	[  +0.183973] systemd-fstab-generator[1196]: Ignoring "noauto" option for root device
	[  +0.211170] systemd-fstab-generator[1208]: Ignoring "noauto" option for root device
	[  +0.289907] systemd-fstab-generator[1223]: Ignoring "noauto" option for root device
	[ +11.485671] systemd-fstab-generator[1316]: Ignoring "noauto" option for root device
	[  +0.110944] kauditd_printk_skb: 205 callbacks suppressed
	[  +3.506063] systemd-fstab-generator[1518]: Ignoring "noauto" option for root device
	[  +5.919021] systemd-fstab-generator[1711]: Ignoring "noauto" option for root device
	[  +0.107295] kauditd_printk_skb: 73 callbacks suppressed
	[  +5.816139] kauditd_printk_skb: 67 callbacks suppressed
	[  +3.735004] systemd-fstab-generator[2210]: Ignoring "noauto" option for root device
	[May28 21:04] kauditd_printk_skb: 17 callbacks suppressed
	[  +6.685830] kauditd_printk_skb: 29 callbacks suppressed
	[  +5.697655] kauditd_printk_skb: 33 callbacks suppressed
	[May28 21:06] hrtimer: interrupt took 10078045 ns
	
	
	==> etcd [0da681cf1666] <==
	{"level":"warn","ts":"2024-05-28T21:31:01.347247Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b951b2df3e986c67","from":"b951b2df3e986c67","remote-peer-id":"9b4da77565cf22d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-28T21:31:01.4256Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b951b2df3e986c67","from":"b951b2df3e986c67","remote-peer-id":"9b4da77565cf22d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-28T21:31:01.447889Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b951b2df3e986c67","from":"b951b2df3e986c67","remote-peer-id":"9b4da77565cf22d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-28T21:31:01.699644Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b951b2df3e986c67","from":"b951b2df3e986c67","remote-peer-id":"9b4da77565cf22d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-28T21:31:01.723989Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b951b2df3e986c67","from":"b951b2df3e986c67","remote-peer-id":"9b4da77565cf22d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-28T21:31:01.734216Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b951b2df3e986c67","from":"b951b2df3e986c67","remote-peer-id":"9b4da77565cf22d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-28T21:31:01.752927Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b951b2df3e986c67","from":"b951b2df3e986c67","remote-peer-id":"9b4da77565cf22d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-28T21:31:01.753168Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b951b2df3e986c67","from":"b951b2df3e986c67","remote-peer-id":"9b4da77565cf22d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-28T21:31:01.762776Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b951b2df3e986c67","from":"b951b2df3e986c67","remote-peer-id":"9b4da77565cf22d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-28T21:31:01.767829Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b951b2df3e986c67","from":"b951b2df3e986c67","remote-peer-id":"9b4da77565cf22d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-28T21:31:01.778568Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b951b2df3e986c67","from":"b951b2df3e986c67","remote-peer-id":"9b4da77565cf22d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-28T21:31:01.787137Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b951b2df3e986c67","from":"b951b2df3e986c67","remote-peer-id":"9b4da77565cf22d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-28T21:31:01.798206Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b951b2df3e986c67","from":"b951b2df3e986c67","remote-peer-id":"9b4da77565cf22d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-28T21:31:01.803733Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b951b2df3e986c67","from":"b951b2df3e986c67","remote-peer-id":"9b4da77565cf22d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-28T21:31:01.808574Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b951b2df3e986c67","from":"b951b2df3e986c67","remote-peer-id":"9b4da77565cf22d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-28T21:31:01.8191Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b951b2df3e986c67","from":"b951b2df3e986c67","remote-peer-id":"9b4da77565cf22d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-28T21:31:01.827561Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b951b2df3e986c67","from":"b951b2df3e986c67","remote-peer-id":"9b4da77565cf22d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-28T21:31:01.837078Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b951b2df3e986c67","from":"b951b2df3e986c67","remote-peer-id":"9b4da77565cf22d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-28T21:31:01.842242Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b951b2df3e986c67","from":"b951b2df3e986c67","remote-peer-id":"9b4da77565cf22d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-28T21:31:01.846525Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b951b2df3e986c67","from":"b951b2df3e986c67","remote-peer-id":"9b4da77565cf22d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-28T21:31:01.846716Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b951b2df3e986c67","from":"b951b2df3e986c67","remote-peer-id":"9b4da77565cf22d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-28T21:31:01.8555Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b951b2df3e986c67","from":"b951b2df3e986c67","remote-peer-id":"9b4da77565cf22d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-28T21:31:01.864599Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b951b2df3e986c67","from":"b951b2df3e986c67","remote-peer-id":"9b4da77565cf22d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-28T21:31:01.87373Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b951b2df3e986c67","from":"b951b2df3e986c67","remote-peer-id":"9b4da77565cf22d6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-28T21:31:01.9475Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b951b2df3e986c67","from":"b951b2df3e986c67","remote-peer-id":"9b4da77565cf22d6","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 21:31:01 up 29 min,  0 users,  load average: 1.09, 0.49, 0.37
	Linux ha-941700 5.10.207 #1 SMP Wed May 22 22:17:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [e136369b3a3f] <==
	I0528 21:30:28.751556       1 main.go:250] Node ha-941700-m04 has CIDR [10.244.3.0/24] 
	I0528 21:30:38.767327       1 main.go:223] Handling node with IPs: map[172.18.216.124:{}]
	I0528 21:30:38.767374       1 main.go:227] handling current node
	I0528 21:30:38.767389       1 main.go:223] Handling node with IPs: map[172.18.211.44:{}]
	I0528 21:30:38.767395       1 main.go:250] Node ha-941700-m02 has CIDR [10.244.1.0/24] 
	I0528 21:30:38.768046       1 main.go:223] Handling node with IPs: map[172.18.222.186:{}]
	I0528 21:30:38.768080       1 main.go:250] Node ha-941700-m03 has CIDR [10.244.2.0/24] 
	I0528 21:30:38.768151       1 main.go:223] Handling node with IPs: map[172.18.217.46:{}]
	I0528 21:30:38.768232       1 main.go:250] Node ha-941700-m04 has CIDR [10.244.3.0/24] 
	I0528 21:30:48.785123       1 main.go:223] Handling node with IPs: map[172.18.216.124:{}]
	I0528 21:30:48.785439       1 main.go:227] handling current node
	I0528 21:30:48.785534       1 main.go:223] Handling node with IPs: map[172.18.211.44:{}]
	I0528 21:30:48.785742       1 main.go:250] Node ha-941700-m02 has CIDR [10.244.1.0/24] 
	I0528 21:30:48.786208       1 main.go:223] Handling node with IPs: map[172.18.222.186:{}]
	I0528 21:30:48.786270       1 main.go:250] Node ha-941700-m03 has CIDR [10.244.2.0/24] 
	I0528 21:30:48.786388       1 main.go:223] Handling node with IPs: map[172.18.217.46:{}]
	I0528 21:30:48.786414       1 main.go:250] Node ha-941700-m04 has CIDR [10.244.3.0/24] 
	I0528 21:30:58.801970       1 main.go:223] Handling node with IPs: map[172.18.216.124:{}]
	I0528 21:30:58.802031       1 main.go:227] handling current node
	I0528 21:30:58.802046       1 main.go:223] Handling node with IPs: map[172.18.211.44:{}]
	I0528 21:30:58.802053       1 main.go:250] Node ha-941700-m02 has CIDR [10.244.1.0/24] 
	I0528 21:30:58.802295       1 main.go:223] Handling node with IPs: map[172.18.222.186:{}]
	I0528 21:30:58.802390       1 main.go:250] Node ha-941700-m03 has CIDR [10.244.2.0/24] 
	I0528 21:30:58.802618       1 main.go:223] Handling node with IPs: map[172.18.217.46:{}]
	I0528 21:30:58.802820       1 main.go:250] Node ha-941700-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [c9264f5e7db1] <==
	Trace[436061386]: ---"About to write a response" 7818ms (21:12:07.111)
	Trace[436061386]: [7.823339123s] [7.823339123s] END
	I0528 21:12:07.116156       1 trace.go:236] Trace[1678403227]: "Update" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:2cfca58c-dfe4-4058-814e-ed803fb1a818,client:172.18.222.186,api-group:coordination.k8s.io,api-version:v1,name:ha-941700-m03,subresource:,namespace:kube-node-lease,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-941700-m03,user-agent:kubelet/v1.30.1 (linux/amd64) kubernetes/6911225,verb:PUT (28-May-2024 21:12:01.011) (total time: 6104ms):
	Trace[1678403227]: ["GuaranteedUpdate etcd3" audit-id:2cfca58c-dfe4-4058-814e-ed803fb1a818,key:/leases/kube-node-lease/ha-941700-m03,type:*coordination.Lease,resource:leases.coordination.k8s.io 6104ms (21:12:01.011)
	Trace[1678403227]:  ---"Txn call completed" 6103ms (21:12:07.115)]
	Trace[1678403227]: [6.104674129s] [6.104674129s] END
	I0528 21:12:07.116179       1 trace.go:236] Trace[1079511571]: "Get" accept:application/json, */*,audit-id:3c63f981-8814-49c8-91ed-3d8854da900d,client:172.18.208.1,api-group:,api-version:v1,name:ha-941700-m03,subresource:,namespace:,protocol:HTTP/2.0,resource:nodes,scope:resource,url:/api/v1/nodes/ha-941700-m03,user-agent:minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format,verb:GET (28-May-2024 21:11:58.028) (total time: 9087ms):
	Trace[1079511571]: ---"About to write a response" 9083ms (21:12:07.111)
	Trace[1079511571]: [9.087559858s] [9.087559858s] END
	I0528 21:12:07.120541       1 trace.go:236] Trace[1782411227]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:e1d4dce7-505f-4b5b-928e-067fa8421500,client:172.18.222.186,api-group:,api-version:v1,name:,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:events,scope:resource,url:/api/v1/namespaces/kube-system/events,user-agent:kubelet/v1.30.1 (linux/amd64) kubernetes/6911225,verb:POST (28-May-2024 21:12:03.128) (total time: 3992ms):
	Trace[1782411227]: ["Create etcd3" audit-id:e1d4dce7-505f-4b5b-928e-067fa8421500,key:/events/kube-system/kube-apiserver-ha-941700-m03.17d3c36560491938,type:*core.Event,resource:events 3991ms (21:12:03.128)
	Trace[1782411227]:  ---"Txn call succeeded" 3991ms (21:12:07.120)]
	Trace[1782411227]: [3.992341324s] [3.992341324s] END
	I0528 21:12:07.121816       1 trace.go:236] Trace[653085208]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:93f18f0b-f2d8-45d0-acd4-a3505949c51b,client:172.18.223.254,api-group:,api-version:v1,name:,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:events,scope:resource,url:/api/v1/namespaces/kube-system/events,user-agent:kubelet/v1.30.1 (linux/amd64) kubernetes/6911225,verb:POST (28-May-2024 21:12:00.855) (total time: 6266ms):
	Trace[653085208]: ["Create etcd3" audit-id:93f18f0b-f2d8-45d0-acd4-a3505949c51b,key:/events/kube-system/kube-apiserver-ha-941700.17d3c364d871e736,type:*core.Event,resource:events 6263ms (21:12:00.857)
	Trace[653085208]:  ---"Txn call succeeded" 6263ms (21:12:07.121)]
	Trace[653085208]: [6.266656279s] [6.266656279s] END
	I0528 21:12:07.166692       1 trace.go:236] Trace[1606402907]: "Patch" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:f4479934-50a3-4d1f-8630-5949f97c53eb,client:172.18.222.186,api-group:,api-version:v1,name:ha-941700-m03,subresource:status,namespace:,protocol:HTTP/2.0,resource:nodes,scope:resource,url:/api/v1/nodes/ha-941700-m03/status,user-agent:kubelet/v1.30.1 (linux/amd64) kubernetes/6911225,verb:PATCH (28-May-2024 21:11:58.016) (total time: 9150ms):
	Trace[1606402907]: ["GuaranteedUpdate etcd3" audit-id:f4479934-50a3-4d1f-8630-5949f97c53eb,key:/minions/ha-941700-m03,type:*core.Node,resource:nodes 9150ms (21:11:58.016)
	Trace[1606402907]:  ---"About to Encode" 9016ms (21:12:07.033)
	Trace[1606402907]:  ---"Txn call completed" 132ms (21:12:07.165)]
	Trace[1606402907]: ---"About to check admission control" 9015ms (21:12:07.032)
	Trace[1606402907]: ---"Object stored in database" 133ms (21:12:07.166)
	Trace[1606402907]: [9.150500344s] [9.150500344s] END
	W0528 21:12:07.734511       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.18.211.44 172.18.216.124 172.18.222.186]
	
	
	==> kube-controller-manager [2d44d675f6ae] <==
	I0528 21:12:57.631345       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="211.246693ms"
	I0528 21:12:57.667550       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="36.140307ms"
	I0528 21:12:57.668232       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="56.5µs"
	I0528 21:12:57.777904       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="27.09333ms"
	I0528 21:12:57.779051       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="58.1µs"
	I0528 21:12:58.907204       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-8f8hl EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-8f8hl\": the object has been modified; please apply your changes to the latest version and try again"
	I0528 21:12:58.910183       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"fe138b4a-dbe3-48c6-b1ae-6cb5ab4e5fea", APIVersion:"v1", ResourceVersion:"233", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-8f8hl EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-8f8hl": the object has been modified; please apply your changes to the latest version and try again
	I0528 21:12:58.977416       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="123.157526ms"
	I0528 21:12:58.977877       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="111.701µs"
	I0528 21:12:59.049540       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="39.856626ms"
	I0528 21:12:59.050752       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="72.001µs"
	I0528 21:13:00.713957       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="16.25053ms"
	I0528 21:13:00.715295       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.601µs"
	I0528 21:13:00.852272       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="25.054801ms"
	I0528 21:13:00.853556       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="54.7µs"
	I0528 21:13:00.951358       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="18.267446ms"
	I0528 21:13:00.951618       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="53.7µs"
	E0528 21:17:21.643992       1 certificate_controller.go:146] Sync csr-7jfmd failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-7jfmd": the object has been modified; please apply your changes to the latest version and try again
	I0528 21:17:21.715299       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-941700-m04\" does not exist"
	I0528 21:17:21.763541       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-941700-m04" podCIDRs=["10.244.3.0/24"]
	I0528 21:17:23.345487       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-941700-m04"
	I0528 21:17:40.157628       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-941700-m04"
	I0528 21:30:43.135712       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-941700-m04"
	I0528 21:30:43.368681       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="17.842474ms"
	I0528 21:30:43.369416       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="50.399µs"
	
	
	==> kube-proxy [2dc916ecd7fb] <==
	I0528 21:04:04.492831       1 server_linux.go:69] "Using iptables proxy"
	I0528 21:04:04.516303       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.18.216.124"]
	I0528 21:04:04.591574       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0528 21:04:04.591732       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0528 21:04:04.591756       1 server_linux.go:165] "Using iptables Proxier"
	I0528 21:04:04.597217       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0528 21:04:04.598085       1 server.go:872] "Version info" version="v1.30.1"
	I0528 21:04:04.598119       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0528 21:04:04.599706       1 config.go:192] "Starting service config controller"
	I0528 21:04:04.599806       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0528 21:04:04.600227       1 config.go:101] "Starting endpoint slice config controller"
	I0528 21:04:04.600257       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0528 21:04:04.601175       1 config.go:319] "Starting node config controller"
	I0528 21:04:04.601363       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0528 21:04:04.701534       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0528 21:04:04.701596       1 shared_informer.go:320] Caches are synced for node config
	I0528 21:04:04.701611       1 shared_informer.go:320] Caches are synced for service config
	W0528 21:12:51.996363       1 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.EndpointSlice ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0528 21:12:51.996362       1 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0528 21:12:51.996435       1 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	
	
	==> kube-scheduler [82481386243c] <==
	E0528 21:03:46.332028       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0528 21:03:46.351278       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0528 21:03:46.351529       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0528 21:03:46.422939       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0528 21:03:46.424926       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0528 21:03:46.464076       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0528 21:03:46.464323       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0528 21:03:46.571263       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0528 21:03:46.573121       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0528 21:03:46.646270       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0528 21:03:46.646617       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0528 21:03:46.676236       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0528 21:03:46.676591       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0528 21:03:46.735637       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0528 21:03:46.735998       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0528 21:03:48.313973       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0528 21:11:40.247560       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-vrmgz\": pod kindnet-vrmgz is already assigned to node \"ha-941700-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-vrmgz" node="ha-941700-m03"
	E0528 21:11:40.247673       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-vrmgz\": pod kindnet-vrmgz is already assigned to node \"ha-941700-m03\"" pod="kube-system/kindnet-vrmgz"
	I0528 21:12:57.216396       1 cache.go:503] "Pod was added to a different node than it was assumed" podKey="7ae7d58b-be7a-4aae-9ef7-0c586e2e5c79" pod="default/busybox-fc5497c4f-qnh86" assumedNode="ha-941700-m02" currentNode="ha-941700-m03"
	E0528 21:12:57.245638       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-xbkdf\": pod busybox-fc5497c4f-xbkdf is already assigned to node \"ha-941700-m03\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-xbkdf" node="ha-941700-m02"
	E0528 21:12:57.245929       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-xbkdf\": pod busybox-fc5497c4f-xbkdf is already assigned to node \"ha-941700-m03\"" pod="default/busybox-fc5497c4f-xbkdf"
	E0528 21:12:57.249153       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-qnh86\": pod busybox-fc5497c4f-qnh86 is already assigned to node \"ha-941700-m02\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-qnh86" node="ha-941700-m03"
	E0528 21:12:57.249246       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 7ae7d58b-be7a-4aae-9ef7-0c586e2e5c79(default/busybox-fc5497c4f-qnh86) was assumed on ha-941700-m03 but assigned to ha-941700-m02" pod="default/busybox-fc5497c4f-qnh86"
	E0528 21:12:57.249286       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-qnh86\": pod busybox-fc5497c4f-qnh86 is already assigned to node \"ha-941700-m02\"" pod="default/busybox-fc5497c4f-qnh86"
	I0528 21:12:57.249384       1 schedule_one.go:1064] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-qnh86" node="ha-941700-m02"
	
	
	==> kubelet <==
	May 28 21:26:48 ha-941700 kubelet[2217]: E0528 21:26:48.559145    2217 iptables.go:577] "Could not set up iptables canary" err=<
	May 28 21:26:48 ha-941700 kubelet[2217]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 28 21:26:48 ha-941700 kubelet[2217]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 28 21:26:48 ha-941700 kubelet[2217]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 28 21:26:48 ha-941700 kubelet[2217]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 28 21:27:48 ha-941700 kubelet[2217]: E0528 21:27:48.560202    2217 iptables.go:577] "Could not set up iptables canary" err=<
	May 28 21:27:48 ha-941700 kubelet[2217]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 28 21:27:48 ha-941700 kubelet[2217]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 28 21:27:48 ha-941700 kubelet[2217]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 28 21:27:48 ha-941700 kubelet[2217]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 28 21:28:48 ha-941700 kubelet[2217]: E0528 21:28:48.558474    2217 iptables.go:577] "Could not set up iptables canary" err=<
	May 28 21:28:48 ha-941700 kubelet[2217]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 28 21:28:48 ha-941700 kubelet[2217]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 28 21:28:48 ha-941700 kubelet[2217]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 28 21:28:48 ha-941700 kubelet[2217]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 28 21:29:48 ha-941700 kubelet[2217]: E0528 21:29:48.559795    2217 iptables.go:577] "Could not set up iptables canary" err=<
	May 28 21:29:48 ha-941700 kubelet[2217]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 28 21:29:48 ha-941700 kubelet[2217]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 28 21:29:48 ha-941700 kubelet[2217]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 28 21:29:48 ha-941700 kubelet[2217]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 28 21:30:48 ha-941700 kubelet[2217]: E0528 21:30:48.558691    2217 iptables.go:577] "Could not set up iptables canary" err=<
	May 28 21:30:48 ha-941700 kubelet[2217]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 28 21:30:48 ha-941700 kubelet[2217]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 28 21:30:48 ha-941700 kubelet[2217]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 28 21:30:48 ha-941700 kubelet[2217]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0528 14:30:53.702856    2688 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-941700 -n ha-941700
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-941700 -n ha-941700: (12.3527851s)
helpers_test.go:261: (dbg) Run:  kubectl --context ha-941700 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (80.58s)
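Editor's note: the describe output above shows ha-941700-m02 flipping MemoryPressure, DiskPressure, PIDPressure and Ready to Unknown ("Kubelet stopped posting node status") after the secondary node was stopped, while m03 and m04 stay Ready. A minimal client-go sketch, not part of the test suite, that reproduces this check against the profile's kubeconfig is shown below; it assumes minikube has already written a context for the ha-941700 profile into the default kubeconfig.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: the default kubeconfig (clientcmd.RecommendedHomeFile)
	// currently points at the ha-941700 cluster.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	// Print each node's Ready condition; after stopping the secondary
	// control plane this prints Unknown/NodeStatusUnknown for ha-941700-m02,
	// matching the post-mortem logs above.
	for _, n := range nodes.Items {
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady {
				fmt.Printf("%-18s Ready=%-8s reason=%s\n", n.Name, c.Status, c.Reason)
			}
		}
	}
}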

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (55.48s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-517300 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-517300 -- exec busybox-fc5497c4f-7bglt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-517300 -- exec busybox-fc5497c4f-7bglt -- sh -c "ping -c 1 172.18.208.1"
E0528 15:08:29.660214    1780 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-200200\client.crt: The system cannot find the path specified.
multinode_test.go:583: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-517300 -- exec busybox-fc5497c4f-7bglt -- sh -c "ping -c 1 172.18.208.1": exit status 1 (10.3985003s)

                                                
                                                
-- stdout --
	PING 172.18.208.1 (172.18.208.1): 56 data bytes
	
	--- 172.18.208.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

                                                
                                                
-- /stdout --
** stderr ** 
	W0528 15:08:25.525953    2460 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:584: Failed to ping host (172.18.208.1) from pod (busybox-fc5497c4f-7bglt): exit status 1
multinode_test.go:572: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-517300 -- exec busybox-fc5497c4f-r5pc4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-517300 -- exec busybox-fc5497c4f-r5pc4 -- sh -c "ping -c 1 172.18.208.1"
multinode_test.go:583: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-517300 -- exec busybox-fc5497c4f-r5pc4 -- sh -c "ping -c 1 172.18.208.1": exit status 1 (10.4146979s)

                                                
                                                
-- stdout --
	PING 172.18.208.1 (172.18.208.1): 56 data bytes
	
	--- 172.18.208.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

                                                
                                                
-- /stdout --
** stderr ** 
	W0528 15:08:36.385566    2208 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:584: Failed to ping host (172.18.208.1) from pod (busybox-fc5497c4f-r5pc4): exit status 1
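Editor's note: both failed probes above reduce to execing into a busybox pod and sending one ICMP echo to the Hyper-V host gateway (172.18.208.1 in this run), which the pod cannot reach. A standalone sketch of that probe, outside the test harness, might look like the following; the pod name and gateway IP are copied from this run and will differ elsewhere, and a `minikube` binary on PATH is an assumption.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Values copied from this run's log; hypothetical on any other run.
	pod := "busybox-fc5497c4f-7bglt"
	hostIP := "172.18.208.1"

	// Same probe the test performs: exec into the busybox pod and ping the
	// Windows host once via `minikube kubectl -p <profile> -- exec ...`.
	cmd := exec.Command("minikube", "kubectl", "-p", "multinode-517300", "--",
		"exec", pod, "--", "sh", "-c", fmt.Sprintf("ping -c 1 %s", hostIP))
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		// 100% packet loss makes ping exit non-zero, which surfaces here as
		// the "exit status 1" reported by the test above.
		fmt.Println("ping from pod failed:", err)
	}
}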
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-517300 -n multinode-517300
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-517300 -n multinode-517300: (11.6482653s)
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-517300 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-517300 logs -n 25: (8.2208639s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| ssh     | mount-start-2-682100 ssh -- ls                    | mount-start-2-682100 | minikube1\jenkins | v1.33.1 | 28 May 24 14:57 PDT | 28 May 24 14:57 PDT |
	|         | /minikube-host                                    |                      |                   |         |                     |                     |
	| delete  | -p mount-start-1-682100                           | mount-start-1-682100 | minikube1\jenkins | v1.33.1 | 28 May 24 14:57 PDT | 28 May 24 14:57 PDT |
	|         | --alsologtostderr -v=5                            |                      |                   |         |                     |                     |
	| ssh     | mount-start-2-682100 ssh -- ls                    | mount-start-2-682100 | minikube1\jenkins | v1.33.1 | 28 May 24 14:57 PDT | 28 May 24 14:58 PDT |
	|         | /minikube-host                                    |                      |                   |         |                     |                     |
	| stop    | -p mount-start-2-682100                           | mount-start-2-682100 | minikube1\jenkins | v1.33.1 | 28 May 24 14:58 PDT | 28 May 24 14:58 PDT |
	| start   | -p mount-start-2-682100                           | mount-start-2-682100 | minikube1\jenkins | v1.33.1 | 28 May 24 14:58 PDT | 28 May 24 15:00 PDT |
	| mount   | C:\Users\jenkins.minikube1:/minikube-host         | mount-start-2-682100 | minikube1\jenkins | v1.33.1 | 28 May 24 15:00 PDT |                     |
	|         | --profile mount-start-2-682100 --v 0              |                      |                   |         |                     |                     |
	|         | --9p-version 9p2000.L --gid 0 --ip                |                      |                   |         |                     |                     |
	|         | --msize 6543 --port 46465 --type 9p --uid         |                      |                   |         |                     |                     |
	|         |                                                 0 |                      |                   |         |                     |                     |
	| ssh     | mount-start-2-682100 ssh -- ls                    | mount-start-2-682100 | minikube1\jenkins | v1.33.1 | 28 May 24 15:00 PDT | 28 May 24 15:00 PDT |
	|         | /minikube-host                                    |                      |                   |         |                     |                     |
	| delete  | -p mount-start-2-682100                           | mount-start-2-682100 | minikube1\jenkins | v1.33.1 | 28 May 24 15:00 PDT | 28 May 24 15:01 PDT |
	| delete  | -p mount-start-1-682100                           | mount-start-1-682100 | minikube1\jenkins | v1.33.1 | 28 May 24 15:01 PDT | 28 May 24 15:01 PDT |
	| start   | -p multinode-517300                               | multinode-517300     | minikube1\jenkins | v1.33.1 | 28 May 24 15:01 PDT | 28 May 24 15:07 PDT |
	|         | --wait=true --memory=2200                         |                      |                   |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |                   |         |                     |                     |
	|         | --alsologtostderr                                 |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                   |                      |                   |         |                     |                     |
	| kubectl | -p multinode-517300 -- apply -f                   | multinode-517300     | minikube1\jenkins | v1.33.1 | 28 May 24 15:08 PDT | 28 May 24 15:08 PDT |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |                   |         |                     |                     |
	| kubectl | -p multinode-517300 -- rollout                    | multinode-517300     | minikube1\jenkins | v1.33.1 | 28 May 24 15:08 PDT | 28 May 24 15:08 PDT |
	|         | status deployment/busybox                         |                      |                   |         |                     |                     |
	| kubectl | -p multinode-517300 -- get pods -o                | multinode-517300     | minikube1\jenkins | v1.33.1 | 28 May 24 15:08 PDT | 28 May 24 15:08 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |         |                     |                     |
	| kubectl | -p multinode-517300 -- get pods -o                | multinode-517300     | minikube1\jenkins | v1.33.1 | 28 May 24 15:08 PDT | 28 May 24 15:08 PDT |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-517300 -- exec                       | multinode-517300     | minikube1\jenkins | v1.33.1 | 28 May 24 15:08 PDT | 28 May 24 15:08 PDT |
	|         | busybox-fc5497c4f-7bglt --                        |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |                   |         |                     |                     |
	| kubectl | -p multinode-517300 -- exec                       | multinode-517300     | minikube1\jenkins | v1.33.1 | 28 May 24 15:08 PDT | 28 May 24 15:08 PDT |
	|         | busybox-fc5497c4f-r5pc4 --                        |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |                   |         |                     |                     |
	| kubectl | -p multinode-517300 -- exec                       | multinode-517300     | minikube1\jenkins | v1.33.1 | 28 May 24 15:08 PDT | 28 May 24 15:08 PDT |
	|         | busybox-fc5497c4f-7bglt --                        |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |                   |         |                     |                     |
	| kubectl | -p multinode-517300 -- exec                       | multinode-517300     | minikube1\jenkins | v1.33.1 | 28 May 24 15:08 PDT | 28 May 24 15:08 PDT |
	|         | busybox-fc5497c4f-r5pc4 --                        |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |                   |         |                     |                     |
	| kubectl | -p multinode-517300 -- exec                       | multinode-517300     | minikube1\jenkins | v1.33.1 | 28 May 24 15:08 PDT | 28 May 24 15:08 PDT |
	|         | busybox-fc5497c4f-7bglt -- nslookup               |                      |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-517300 -- exec                       | multinode-517300     | minikube1\jenkins | v1.33.1 | 28 May 24 15:08 PDT | 28 May 24 15:08 PDT |
	|         | busybox-fc5497c4f-r5pc4 -- nslookup               |                      |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-517300 -- get pods -o                | multinode-517300     | minikube1\jenkins | v1.33.1 | 28 May 24 15:08 PDT | 28 May 24 15:08 PDT |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-517300 -- exec                       | multinode-517300     | minikube1\jenkins | v1.33.1 | 28 May 24 15:08 PDT | 28 May 24 15:08 PDT |
	|         | busybox-fc5497c4f-7bglt                           |                      |                   |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |                   |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |                   |         |                     |                     |
	| kubectl | -p multinode-517300 -- exec                       | multinode-517300     | minikube1\jenkins | v1.33.1 | 28 May 24 15:08 PDT |                     |
	|         | busybox-fc5497c4f-7bglt -- sh                     |                      |                   |         |                     |                     |
	|         | -c ping -c 1 172.18.208.1                         |                      |                   |         |                     |                     |
	| kubectl | -p multinode-517300 -- exec                       | multinode-517300     | minikube1\jenkins | v1.33.1 | 28 May 24 15:08 PDT | 28 May 24 15:08 PDT |
	|         | busybox-fc5497c4f-r5pc4                           |                      |                   |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |                   |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |                   |         |                     |                     |
	| kubectl | -p multinode-517300 -- exec                       | multinode-517300     | minikube1\jenkins | v1.33.1 | 28 May 24 15:08 PDT |                     |
	|         | busybox-fc5497c4f-r5pc4 -- sh                     |                      |                   |         |                     |                     |
	|         | -c ping -c 1 172.18.208.1                         |                      |                   |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/28 15:01:11
	Running on machine: minikube1
	Binary: Built with gc go1.22.3 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0528 15:01:11.163400   12872 out.go:291] Setting OutFile to fd 692 ...
	I0528 15:01:11.164199   12872 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 15:01:11.164199   12872 out.go:304] Setting ErrFile to fd 924...
	I0528 15:01:11.164199   12872 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 15:01:11.188153   12872 out.go:298] Setting JSON to false
	I0528 15:01:11.191721   12872 start.go:129] hostinfo: {"hostname":"minikube1","uptime":6435,"bootTime":1716927235,"procs":202,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4412 Build 19045.4412","kernelVersion":"10.0.19045.4412 Build 19045.4412","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0528 15:01:11.191721   12872 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0528 15:01:11.197459   12872 out.go:177] * [multinode-517300] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4412 Build 19045.4412
	I0528 15:01:11.202466   12872 notify.go:220] Checking for updates...
	I0528 15:01:11.202466   12872 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0528 15:01:11.205248   12872 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0528 15:01:11.207677   12872 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0528 15:01:11.209614   12872 out.go:177]   - MINIKUBE_LOCATION=18966
	I0528 15:01:11.212122   12872 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0528 15:01:11.216109   12872 config.go:182] Loaded profile config "ha-941700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0528 15:01:11.216109   12872 driver.go:392] Setting default libvirt URI to qemu:///system
	I0528 15:01:16.564947   12872 out.go:177] * Using the hyperv driver based on user configuration
	I0528 15:01:16.568214   12872 start.go:297] selected driver: hyperv
	I0528 15:01:16.568757   12872 start.go:901] validating driver "hyperv" against <nil>
	I0528 15:01:16.568757   12872 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0528 15:01:16.620371   12872 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0528 15:01:16.622056   12872 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0528 15:01:16.622056   12872 cni.go:84] Creating CNI manager for ""
	I0528 15:01:16.622056   12872 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0528 15:01:16.622056   12872 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0528 15:01:16.622678   12872 start.go:340] cluster config:
	{Name:multinode-517300 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-517300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0528 15:01:16.622678   12872 iso.go:125] acquiring lock: {Name:mk309f19768252c5551da1c999633aa2617813e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0528 15:01:16.627831   12872 out.go:177] * Starting "multinode-517300" primary control-plane node in "multinode-517300" cluster
	I0528 15:01:16.630419   12872 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0528 15:01:16.630419   12872 preload.go:147] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
	I0528 15:01:16.630419   12872 cache.go:56] Caching tarball of preloaded images
	I0528 15:01:16.630419   12872 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0528 15:01:16.630419   12872 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0528 15:01:16.631464   12872 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-517300\config.json ...
	I0528 15:01:16.631632   12872 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-517300\config.json: {Name:mk315763fbb1d1d65c5834d51a2063a74fcd4d3b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 15:01:16.633196   12872 start.go:360] acquireMachinesLock for multinode-517300: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0528 15:01:16.633346   12872 start.go:364] duration metric: took 149.9µs to acquireMachinesLock for "multinode-517300"
	I0528 15:01:16.633537   12872 start.go:93] Provisioning new machine with config: &{Name:multinode-517300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-517300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0528 15:01:16.633537   12872 start.go:125] createHost starting for "" (driver="hyperv")
	I0528 15:01:16.637746   12872 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0528 15:01:16.638939   12872 start.go:159] libmachine.API.Create for "multinode-517300" (driver="hyperv")
	I0528 15:01:16.638939   12872 client.go:168] LocalClient.Create starting
	I0528 15:01:16.639328   12872 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0528 15:01:16.639865   12872 main.go:141] libmachine: Decoding PEM data...
	I0528 15:01:16.639865   12872 main.go:141] libmachine: Parsing certificate...
	I0528 15:01:16.640246   12872 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0528 15:01:16.640485   12872 main.go:141] libmachine: Decoding PEM data...
	I0528 15:01:16.640485   12872 main.go:141] libmachine: Parsing certificate...
	I0528 15:01:16.640680   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0528 15:01:18.701130   12872 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0528 15:01:18.701130   12872 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:01:18.701282   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0528 15:01:20.407001   12872 main.go:141] libmachine: [stdout =====>] : False
	
	I0528 15:01:20.407524   12872 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:01:20.407610   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0528 15:01:21.845912   12872 main.go:141] libmachine: [stdout =====>] : True
	
	I0528 15:01:21.846159   12872 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:01:21.846159   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0528 15:01:25.540629   12872 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0528 15:01:25.540629   12872 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:01:25.543235   12872 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1716398070-18934-amd64.iso...
	I0528 15:01:26.034694   12872 main.go:141] libmachine: Creating SSH key...
	I0528 15:01:26.264659   12872 main.go:141] libmachine: Creating VM...
	I0528 15:01:26.264659   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0528 15:01:29.135410   12872 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0528 15:01:29.136265   12872 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:01:29.136265   12872 main.go:141] libmachine: Using switch "Default Switch"
	I0528 15:01:29.136265   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0528 15:01:30.849101   12872 main.go:141] libmachine: [stdout =====>] : True
	
	I0528 15:01:30.849591   12872 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:01:30.849591   12872 main.go:141] libmachine: Creating VHD
	I0528 15:01:30.849678   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-517300\fixed.vhd' -SizeBytes 10MB -Fixed
	I0528 15:01:34.627376   12872 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-517300\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 40760F43-36C8-4021-97B9-BF3154434356
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0528 15:01:34.627557   12872 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:01:34.627557   12872 main.go:141] libmachine: Writing magic tar header
	I0528 15:01:34.627557   12872 main.go:141] libmachine: Writing SSH key tar header
	I0528 15:01:34.637888   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-517300\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-517300\disk.vhd' -VHDType Dynamic -DeleteSource
	I0528 15:01:37.837419   12872 main.go:141] libmachine: [stdout =====>] : 
	I0528 15:01:37.838145   12872 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:01:37.838145   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-517300\disk.vhd' -SizeBytes 20000MB
	I0528 15:01:40.430597   12872 main.go:141] libmachine: [stdout =====>] : 
	I0528 15:01:40.431042   12872 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:01:40.431175   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-517300 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-517300' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0528 15:01:44.126231   12872 main.go:141] libmachine: [stdout =====>] : 
	Name             State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----             ----- ----------- ----------------- ------   ------             -------
	multinode-517300 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0528 15:01:44.126844   12872 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:01:44.127066   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-517300 -DynamicMemoryEnabled $false
	I0528 15:01:46.393679   12872 main.go:141] libmachine: [stdout =====>] : 
	I0528 15:01:46.393873   12872 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:01:46.393873   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-517300 -Count 2
	I0528 15:01:48.560469   12872 main.go:141] libmachine: [stdout =====>] : 
	I0528 15:01:48.560897   12872 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:01:48.560897   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-517300 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-517300\boot2docker.iso'
	I0528 15:01:51.164534   12872 main.go:141] libmachine: [stdout =====>] : 
	I0528 15:01:51.167334   12872 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:01:51.167700   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-517300 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-517300\disk.vhd'
	I0528 15:01:53.816857   12872 main.go:141] libmachine: [stdout =====>] : 
	I0528 15:01:53.817107   12872 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:01:53.817107   12872 main.go:141] libmachine: Starting VM...
	I0528 15:01:53.817327   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-517300
	I0528 15:01:56.919455   12872 main.go:141] libmachine: [stdout =====>] : 
	I0528 15:01:56.919455   12872 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:01:56.919872   12872 main.go:141] libmachine: Waiting for host to start...
	I0528 15:01:56.919872   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300 ).state
	I0528 15:01:59.247620   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:01:59.248067   12872 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:01:59.248165   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300 ).networkadapters[0]).ipaddresses[0]
	I0528 15:02:01.813236   12872 main.go:141] libmachine: [stdout =====>] : 
	I0528 15:02:01.813725   12872 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:02:02.825288   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300 ).state
	I0528 15:02:05.088943   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:02:05.090014   12872 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:02:05.090014   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300 ).networkadapters[0]).ipaddresses[0]
	I0528 15:02:07.683352   12872 main.go:141] libmachine: [stdout =====>] : 
	I0528 15:02:07.683352   12872 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:02:08.685018   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300 ).state
	I0528 15:02:10.935336   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:02:10.935336   12872 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:02:10.935882   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300 ).networkadapters[0]).ipaddresses[0]
	I0528 15:02:13.512565   12872 main.go:141] libmachine: [stdout =====>] : 
	I0528 15:02:13.512565   12872 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:02:14.521609   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300 ).state
	I0528 15:02:16.810085   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:02:16.810085   12872 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:02:16.810676   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300 ).networkadapters[0]).ipaddresses[0]
	I0528 15:02:19.411590   12872 main.go:141] libmachine: [stdout =====>] : 
	I0528 15:02:19.412578   12872 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:02:20.419301   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300 ).state
	I0528 15:02:22.680603   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:02:22.681000   12872 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:02:22.681097   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300 ).networkadapters[0]).ipaddresses[0]
	I0528 15:02:25.335205   12872 main.go:141] libmachine: [stdout =====>] : 172.18.214.123
	
	I0528 15:02:25.335293   12872 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:02:25.335406   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300 ).state
	I0528 15:02:27.475777   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:02:27.476521   12872 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:02:27.476584   12872 machine.go:94] provisionDockerMachine start ...
	I0528 15:02:27.476584   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300 ).state
	I0528 15:02:29.693981   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:02:29.694982   12872 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:02:29.694982   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300 ).networkadapters[0]).ipaddresses[0]
	I0528 15:02:32.356437   12872 main.go:141] libmachine: [stdout =====>] : 172.18.214.123
	
	I0528 15:02:32.356497   12872 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:02:32.362400   12872 main.go:141] libmachine: Using SSH client type: native
	I0528 15:02:32.373456   12872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5ca4a0] 0x5cd080 <nil>  [] 0s} 172.18.214.123 22 <nil> <nil>}
	I0528 15:02:32.373456   12872 main.go:141] libmachine: About to run SSH command:
	hostname
	I0528 15:02:32.507222   12872 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0528 15:02:32.507222   12872 buildroot.go:166] provisioning hostname "multinode-517300"
	I0528 15:02:32.507222   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300 ).state
	I0528 15:02:34.717950   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:02:34.718382   12872 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:02:34.718382   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300 ).networkadapters[0]).ipaddresses[0]
	I0528 15:02:37.354368   12872 main.go:141] libmachine: [stdout =====>] : 172.18.214.123
	
	I0528 15:02:37.355056   12872 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:02:37.360899   12872 main.go:141] libmachine: Using SSH client type: native
	I0528 15:02:37.360899   12872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5ca4a0] 0x5cd080 <nil>  [] 0s} 172.18.214.123 22 <nil> <nil>}
	I0528 15:02:37.360899   12872 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-517300 && echo "multinode-517300" | sudo tee /etc/hostname
	I0528 15:02:37.523867   12872 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-517300
	
	I0528 15:02:37.524046   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300 ).state
	I0528 15:02:39.654029   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:02:39.654558   12872 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:02:39.654558   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300 ).networkadapters[0]).ipaddresses[0]
	I0528 15:02:42.219863   12872 main.go:141] libmachine: [stdout =====>] : 172.18.214.123
	
	I0528 15:02:42.220540   12872 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:02:42.226078   12872 main.go:141] libmachine: Using SSH client type: native
	I0528 15:02:42.226836   12872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5ca4a0] 0x5cd080 <nil>  [] 0s} 172.18.214.123 22 <nil> <nil>}
	I0528 15:02:42.226836   12872 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-517300' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-517300/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-517300' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0528 15:02:42.374278   12872 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0528 15:02:42.374278   12872 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0528 15:02:42.374278   12872 buildroot.go:174] setting up certificates
	I0528 15:02:42.374278   12872 provision.go:84] configureAuth start
	I0528 15:02:42.374278   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300 ).state
	I0528 15:02:44.532216   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:02:44.532216   12872 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:02:44.532216   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300 ).networkadapters[0]).ipaddresses[0]
	I0528 15:02:47.032923   12872 main.go:141] libmachine: [stdout =====>] : 172.18.214.123
	
	I0528 15:02:47.032923   12872 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:02:47.033626   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300 ).state
	I0528 15:02:49.122536   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:02:49.122536   12872 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:02:49.122536   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300 ).networkadapters[0]).ipaddresses[0]
	I0528 15:02:51.665376   12872 main.go:141] libmachine: [stdout =====>] : 172.18.214.123
	
	I0528 15:02:51.666317   12872 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:02:51.666408   12872 provision.go:143] copyHostCerts
	I0528 15:02:51.666470   12872 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0528 15:02:51.666470   12872 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0528 15:02:51.666470   12872 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0528 15:02:51.667337   12872 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0528 15:02:51.668502   12872 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0528 15:02:51.668962   12872 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0528 15:02:51.668962   12872 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0528 15:02:51.668962   12872 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0528 15:02:51.670395   12872 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0528 15:02:51.671199   12872 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0528 15:02:51.671316   12872 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0528 15:02:51.671725   12872 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1675 bytes)
	I0528 15:02:51.672671   12872 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-517300 san=[127.0.0.1 172.18.214.123 localhost minikube multinode-517300]
	I0528 15:02:51.904418   12872 provision.go:177] copyRemoteCerts
	I0528 15:02:51.915080   12872 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0528 15:02:51.915080   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300 ).state
	I0528 15:02:54.086231   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:02:54.087262   12872 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:02:54.087415   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300 ).networkadapters[0]).ipaddresses[0]
	I0528 15:02:56.626318   12872 main.go:141] libmachine: [stdout =====>] : 172.18.214.123
	
	I0528 15:02:56.626318   12872 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:02:56.626749   12872 sshutil.go:53] new ssh client: &{IP:172.18.214.123 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-517300\id_rsa Username:docker}
	I0528 15:02:56.727401   12872 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.8123082s)
	I0528 15:02:56.727523   12872 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0528 15:02:56.728235   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0528 15:02:56.769580   12872 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0528 15:02:56.770102   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0528 15:02:56.808824   12872 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0528 15:02:56.809336   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0528 15:02:56.849203   12872 provision.go:87] duration metric: took 14.4748268s to configureAuth
	I0528 15:02:56.849253   12872 buildroot.go:189] setting minikube options for container-runtime
	I0528 15:02:56.849592   12872 config.go:182] Loaded profile config "multinode-517300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0528 15:02:56.849592   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300 ).state
	I0528 15:02:58.953795   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:02:58.953795   12872 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:02:58.954453   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300 ).networkadapters[0]).ipaddresses[0]
	I0528 15:03:01.474367   12872 main.go:141] libmachine: [stdout =====>] : 172.18.214.123
	
	I0528 15:03:01.475377   12872 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:03:01.481751   12872 main.go:141] libmachine: Using SSH client type: native
	I0528 15:03:01.481751   12872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5ca4a0] 0x5cd080 <nil>  [] 0s} 172.18.214.123 22 <nil> <nil>}
	I0528 15:03:01.481751   12872 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0528 15:03:01.619597   12872 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0528 15:03:01.619709   12872 buildroot.go:70] root file system type: tmpfs
	I0528 15:03:01.620022   12872 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0528 15:03:01.620123   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300 ).state
	I0528 15:03:03.735234   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:03:03.735234   12872 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:03:03.735383   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300 ).networkadapters[0]).ipaddresses[0]
	I0528 15:03:06.236242   12872 main.go:141] libmachine: [stdout =====>] : 172.18.214.123
	
	I0528 15:03:06.236242   12872 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:03:06.241610   12872 main.go:141] libmachine: Using SSH client type: native
	I0528 15:03:06.242303   12872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5ca4a0] 0x5cd080 <nil>  [] 0s} 172.18.214.123 22 <nil> <nil>}
	I0528 15:03:06.242834   12872 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0528 15:03:06.404798   12872 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0528 15:03:06.404798   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300 ).state
	I0528 15:03:08.474404   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:03:08.474404   12872 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:03:08.475304   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300 ).networkadapters[0]).ipaddresses[0]
	I0528 15:03:11.049041   12872 main.go:141] libmachine: [stdout =====>] : 172.18.214.123
	
	I0528 15:03:11.049041   12872 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:03:11.053658   12872 main.go:141] libmachine: Using SSH client type: native
	I0528 15:03:11.053658   12872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5ca4a0] 0x5cd080 <nil>  [] 0s} 172.18.214.123 22 <nil> <nil>}
	I0528 15:03:11.053658   12872 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0528 15:03:13.214493   12872 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0528 15:03:13.214493   12872 machine.go:97] duration metric: took 45.7377859s to provisionDockerMachine
	I0528 15:03:13.214493   12872 client.go:171] duration metric: took 1m56.5752397s to LocalClient.Create
	I0528 15:03:13.214493   12872 start.go:167] duration metric: took 1m56.5752397s to libmachine.API.Create "multinode-517300"
	I0528 15:03:13.214493   12872 start.go:293] postStartSetup for "multinode-517300" (driver="hyperv")
	I0528 15:03:13.214493   12872 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0528 15:03:13.229612   12872 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0528 15:03:13.229612   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300 ).state
	I0528 15:03:15.293111   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:03:15.293898   12872 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:03:15.294021   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300 ).networkadapters[0]).ipaddresses[0]
	I0528 15:03:17.806593   12872 main.go:141] libmachine: [stdout =====>] : 172.18.214.123
	
	I0528 15:03:17.807825   12872 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:03:17.808150   12872 sshutil.go:53] new ssh client: &{IP:172.18.214.123 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-517300\id_rsa Username:docker}
	I0528 15:03:17.911755   12872 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.6821304s)
	I0528 15:03:17.924854   12872 ssh_runner.go:195] Run: cat /etc/os-release
	I0528 15:03:17.931965   12872 command_runner.go:130] > NAME=Buildroot
	I0528 15:03:17.931965   12872 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0528 15:03:17.931965   12872 command_runner.go:130] > ID=buildroot
	I0528 15:03:17.931965   12872 command_runner.go:130] > VERSION_ID=2023.02.9
	I0528 15:03:17.931965   12872 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0528 15:03:17.931965   12872 info.go:137] Remote host: Buildroot 2023.02.9
	I0528 15:03:17.932486   12872 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0528 15:03:17.932626   12872 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0528 15:03:17.934035   12872 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\17802.pem -> 17802.pem in /etc/ssl/certs
	I0528 15:03:17.934035   12872 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\17802.pem -> /etc/ssl/certs/17802.pem
	I0528 15:03:17.946845   12872 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0528 15:03:17.965824   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\17802.pem --> /etc/ssl/certs/17802.pem (1708 bytes)
	I0528 15:03:18.020226   12872 start.go:296] duration metric: took 4.8057203s for postStartSetup
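
The postStartSetup block above creates the required guest directories and then mirrors everything under the local .minikube\files tree into the VM (here files\etc\ssl\certs\17802.pem becomes /etc/ssl/certs/17802.pem). A minimal Go sketch of that scan-and-map step, assuming a simple "relative path under files/ equals destination path in the guest" rule; the helper name and layout are illustrative, not minikube's API:

package main

import (
	"fmt"
	"io/fs"
	"path/filepath"
	"strings"
)

// mapLocalAssets walks a local "files" directory and maps every file to the
// same relative path inside the guest, converting Windows separators to the
// guest's POSIX paths. Illustrative helper only.
func mapLocalAssets(root string) (map[string]string, error) {
	assets := map[string]string{}
	err := filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
		if err != nil || d.IsDir() {
			return err
		}
		rel, err := filepath.Rel(root, path)
		if err != nil {
			return err
		}
		assets[path] = "/" + strings.ReplaceAll(rel, string(filepath.Separator), "/")
		return nil
	})
	return assets, err
}

func main() {
	// Point this at a scratch directory to try it; on the CI host the scanned
	// root is the .minikube\files directory named in the log.
	m, err := mapLocalAssets(".")
	if err != nil {
		panic(err)
	}
	for src, dst := range m {
		fmt.Printf("%s -> %s\n", src, dst)
	}
}
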
	I0528 15:03:18.023799   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300 ).state
	I0528 15:03:20.177340   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:03:20.178467   12872 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:03:20.178518   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300 ).networkadapters[0]).ipaddresses[0]
	I0528 15:03:22.693962   12872 main.go:141] libmachine: [stdout =====>] : 172.18.214.123
	
	I0528 15:03:22.694751   12872 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:03:22.694829   12872 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-517300\config.json ...
	I0528 15:03:22.697812   12872 start.go:128] duration metric: took 2m6.0639352s to createHost
	I0528 15:03:22.697812   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300 ).state
	I0528 15:03:24.874995   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:03:24.875315   12872 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:03:24.875315   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300 ).networkadapters[0]).ipaddresses[0]
	I0528 15:03:27.390561   12872 main.go:141] libmachine: [stdout =====>] : 172.18.214.123
	
	I0528 15:03:27.390561   12872 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:03:27.396350   12872 main.go:141] libmachine: Using SSH client type: native
	I0528 15:03:27.396350   12872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5ca4a0] 0x5cd080 <nil>  [] 0s} 172.18.214.123 22 <nil> <nil>}
	I0528 15:03:27.396926   12872 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0528 15:03:27.538337   12872 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716933807.547314501
	
	I0528 15:03:27.538337   12872 fix.go:216] guest clock: 1716933807.547314501
	I0528 15:03:27.538507   12872 fix.go:229] Guest: 2024-05-28 15:03:27.547314501 -0700 PDT Remote: 2024-05-28 15:03:22.6978128 -0700 PDT m=+131.618085101 (delta=4.849501701s)
	I0528 15:03:27.538707   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300 ).state
	I0528 15:03:29.639357   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:03:29.639357   12872 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:03:29.640218   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300 ).networkadapters[0]).ipaddresses[0]
	I0528 15:03:32.139672   12872 main.go:141] libmachine: [stdout =====>] : 172.18.214.123
	
	I0528 15:03:32.140419   12872 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:03:32.145304   12872 main.go:141] libmachine: Using SSH client type: native
	I0528 15:03:32.146083   12872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5ca4a0] 0x5cd080 <nil>  [] 0s} 172.18.214.123 22 <nil> <nil>}
	I0528 15:03:32.146083   12872 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1716933807
	I0528 15:03:32.295313   12872 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue May 28 22:03:27 UTC 2024
	
	I0528 15:03:32.295368   12872 fix.go:236] clock set: Tue May 28 22:03:27 UTC 2024
	 (err=<nil>)
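
The lines above read the guest clock with "date +%s.%N", compare it to the host time (delta=4.849501701s in this run), and reset it with "sudo date -s @<unix-seconds>". A minimal Go sketch of that delta check using the value from the log; parseGuestClock is a hypothetical helper, not minikube's function:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock converts `date +%s.%N` output (seconds.nanoseconds) into a
// time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1716933807.547314501") // value from the log above
	if err != nil {
		panic(err)
	}
	host := time.Now()
	delta := guest.Sub(host)
	fmt.Printf("guest clock: %v, delta vs host: %v\n", guest, delta)
	// When the skew is noticeable, the guest clock is reset over SSH, as the
	// log shows with `sudo date -s @1716933807`.
	if delta > time.Second || delta < -time.Second {
		fmt.Printf("would run: sudo date -s @%d\n", host.Unix())
	}
}
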
	I0528 15:03:32.295368   12872 start.go:83] releasing machines lock for "multinode-517300", held for 2m15.6616172s
	I0528 15:03:32.295658   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300 ).state
	I0528 15:03:34.405796   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:03:34.405796   12872 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:03:34.405914   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300 ).networkadapters[0]).ipaddresses[0]
	I0528 15:03:36.891617   12872 main.go:141] libmachine: [stdout =====>] : 172.18.214.123
	
	I0528 15:03:36.892649   12872 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:03:36.897116   12872 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0528 15:03:36.897321   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300 ).state
	I0528 15:03:36.907953   12872 ssh_runner.go:195] Run: cat /version.json
	I0528 15:03:36.907953   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300 ).state
	I0528 15:03:39.110978   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:03:39.111050   12872 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:03:39.111050   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:03:39.111050   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300 ).networkadapters[0]).ipaddresses[0]
	I0528 15:03:39.111050   12872 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:03:39.111050   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300 ).networkadapters[0]).ipaddresses[0]
	I0528 15:03:41.772881   12872 main.go:141] libmachine: [stdout =====>] : 172.18.214.123
	
	I0528 15:03:41.773041   12872 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:03:41.773208   12872 sshutil.go:53] new ssh client: &{IP:172.18.214.123 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-517300\id_rsa Username:docker}
	I0528 15:03:41.797336   12872 main.go:141] libmachine: [stdout =====>] : 172.18.214.123
	
	I0528 15:03:41.797336   12872 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:03:41.798348   12872 sshutil.go:53] new ssh client: &{IP:172.18.214.123 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-517300\id_rsa Username:docker}
	I0528 15:03:41.874849   12872 command_runner.go:130] > {"iso_version": "v1.33.1-1716398070-18934", "kicbase_version": "v0.0.44-1716228441-18934", "minikube_version": "v1.33.1", "commit": "7bc64cce06153f72c1bf9cbcf2114663ad5af3b7"}
	I0528 15:03:41.875234   12872 ssh_runner.go:235] Completed: cat /version.json: (4.9672679s)
	I0528 15:03:41.888873   12872 ssh_runner.go:195] Run: systemctl --version
	I0528 15:03:41.942997   12872 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0528 15:03:41.942997   12872 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.0458675s)
	I0528 15:03:41.942997   12872 command_runner.go:130] > systemd 252 (252)
	I0528 15:03:41.942997   12872 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0528 15:03:41.955535   12872 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0528 15:03:41.964167   12872 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0528 15:03:41.964732   12872 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0528 15:03:41.976032   12872 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0528 15:03:42.003337   12872 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0528 15:03:42.003501   12872 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
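
The two lines above disable conflicting bridge/podman CNI configs by renaming them with a .mk_disabled suffix (87-podman-bridge.conflist in this run). A small Go sketch of the same rename-to-disable idea, run against a scratch directory instead of /etc/cni/net.d; the function name is illustrative:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableBridgeCNIs renames any bridge/podman config in dir so the container
// runtime ignores it, mirroring the find/mv command in the log.
func disableBridgeCNIs(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var disabled []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return disabled, err
			}
			disabled = append(disabled, src)
		}
	}
	return disabled, nil
}

func main() {
	dir, err := os.MkdirTemp("", "cni")
	if err != nil {
		panic(err)
	}
	os.WriteFile(filepath.Join(dir, "87-podman-bridge.conflist"), []byte("{}"), 0o644)
	got, err := disableBridgeCNIs(dir)
	if err != nil {
		panic(err)
	}
	fmt.Println("disabled:", got)
}
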
	I0528 15:03:42.003501   12872 start.go:494] detecting cgroup driver to use...
	I0528 15:03:42.003585   12872 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0528 15:03:42.040066   12872 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0528 15:03:42.052182   12872 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0528 15:03:42.083466   12872 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0528 15:03:42.101368   12872 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0528 15:03:42.113099   12872 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0528 15:03:42.141653   12872 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0528 15:03:42.171014   12872 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0528 15:03:42.202325   12872 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0528 15:03:42.236532   12872 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0528 15:03:42.269224   12872 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0528 15:03:42.298597   12872 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0528 15:03:42.329200   12872 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0528 15:03:42.360483   12872 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0528 15:03:42.374863   12872 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0528 15:03:42.388510   12872 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0528 15:03:42.418304   12872 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 15:03:42.609602   12872 ssh_runner.go:195] Run: sudo systemctl restart containerd
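
The sed commands above rewrite /etc/containerd/config.toml in place, most notably switching SystemdCgroup to false so containerd matches the "cgroupfs" driver chosen for this node. A tiny Go sketch of the equivalent regex rewrite, using a trimmed, hypothetical config excerpt rather than the real file:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Trimmed, hypothetical excerpt of /etc/containerd/config.toml.
	sample := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
`
	// Equivalent of: sed -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	fmt.Print(re.ReplaceAllString(sample, "${1}SystemdCgroup = false"))
}
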
	I0528 15:03:42.644401   12872 start.go:494] detecting cgroup driver to use...
	I0528 15:03:42.657827   12872 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0528 15:03:42.679223   12872 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0528 15:03:42.679223   12872 command_runner.go:130] > [Unit]
	I0528 15:03:42.679223   12872 command_runner.go:130] > Description=Docker Application Container Engine
	I0528 15:03:42.679223   12872 command_runner.go:130] > Documentation=https://docs.docker.com
	I0528 15:03:42.679223   12872 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0528 15:03:42.679325   12872 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0528 15:03:42.679325   12872 command_runner.go:130] > StartLimitBurst=3
	I0528 15:03:42.679325   12872 command_runner.go:130] > StartLimitIntervalSec=60
	I0528 15:03:42.679325   12872 command_runner.go:130] > [Service]
	I0528 15:03:42.679325   12872 command_runner.go:130] > Type=notify
	I0528 15:03:42.679325   12872 command_runner.go:130] > Restart=on-failure
	I0528 15:03:42.679386   12872 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0528 15:03:42.679386   12872 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0528 15:03:42.679386   12872 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0528 15:03:42.679386   12872 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0528 15:03:42.679386   12872 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0528 15:03:42.679386   12872 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0528 15:03:42.679386   12872 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0528 15:03:42.679470   12872 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0528 15:03:42.679470   12872 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0528 15:03:42.679470   12872 command_runner.go:130] > ExecStart=
	I0528 15:03:42.679470   12872 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0528 15:03:42.679584   12872 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0528 15:03:42.679643   12872 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0528 15:03:42.679643   12872 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0528 15:03:42.679643   12872 command_runner.go:130] > LimitNOFILE=infinity
	I0528 15:03:42.679712   12872 command_runner.go:130] > LimitNPROC=infinity
	I0528 15:03:42.679712   12872 command_runner.go:130] > LimitCORE=infinity
	I0528 15:03:42.679712   12872 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0528 15:03:42.679712   12872 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0528 15:03:42.679769   12872 command_runner.go:130] > TasksMax=infinity
	I0528 15:03:42.679769   12872 command_runner.go:130] > TimeoutStartSec=0
	I0528 15:03:42.679769   12872 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0528 15:03:42.679769   12872 command_runner.go:130] > Delegate=yes
	I0528 15:03:42.679769   12872 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0528 15:03:42.679769   12872 command_runner.go:130] > KillMode=process
	I0528 15:03:42.679847   12872 command_runner.go:130] > [Install]
	I0528 15:03:42.679847   12872 command_runner.go:130] > WantedBy=multi-user.target
	I0528 15:03:42.691947   12872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0528 15:03:42.724031   12872 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0528 15:03:42.773919   12872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0528 15:03:42.813091   12872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0528 15:03:42.846844   12872 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0528 15:03:42.906902   12872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0528 15:03:42.929145   12872 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0528 15:03:42.960966   12872 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0528 15:03:42.974059   12872 ssh_runner.go:195] Run: which cri-dockerd
	I0528 15:03:42.978713   12872 command_runner.go:130] > /usr/bin/cri-dockerd
	I0528 15:03:42.990274   12872 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0528 15:03:43.011098   12872 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0528 15:03:43.058635   12872 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0528 15:03:43.273833   12872 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0528 15:03:43.457697   12872 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0528 15:03:43.458070   12872 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0528 15:03:43.504783   12872 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 15:03:43.713579   12872 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0528 15:03:46.204101   12872 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.4905154s)
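
Before the docker restart above, a 130-byte /etc/docker/daemon.json is written so dockerd also uses the cgroupfs driver. The exact contents of that file are not shown in the log; the sketch below assumes the usual exec-opts/log-opts/storage-driver fields and simply renders such a file as JSON:

package main

import (
	"encoding/json"
	"fmt"
)

// daemonConfig is an assumed shape for the daemon.json minikube writes here;
// the precise 130-byte payload in this run is not visible in the log.
type daemonConfig struct {
	ExecOpts      []string          `json:"exec-opts"`
	LogDriver     string            `json:"log-driver"`
	LogOpts       map[string]string `json:"log-opts"`
	StorageDriver string            `json:"storage-driver"`
}

func main() {
	cfg := daemonConfig{
		ExecOpts:      []string{"native.cgroupdriver=cgroupfs"},
		LogDriver:     "json-file",
		LogOpts:       map[string]string{"max-size": "100m"},
		StorageDriver: "overlay2",
	}
	out, err := json.MarshalIndent(cfg, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
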
	I0528 15:03:46.217223   12872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0528 15:03:46.253606   12872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0528 15:03:46.285326   12872 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0528 15:03:46.483067   12872 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0528 15:03:46.703543   12872 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 15:03:46.907161   12872 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0528 15:03:46.949943   12872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0528 15:03:46.982282   12872 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 15:03:47.186061   12872 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0528 15:03:47.298244   12872 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0528 15:03:47.311316   12872 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0528 15:03:47.319134   12872 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0528 15:03:47.319178   12872 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0528 15:03:47.319178   12872 command_runner.go:130] > Device: 0,22	Inode: 875         Links: 1
	I0528 15:03:47.319178   12872 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0528 15:03:47.319178   12872 command_runner.go:130] > Access: 2024-05-28 22:03:47.223379628 +0000
	I0528 15:03:47.319248   12872 command_runner.go:130] > Modify: 2024-05-28 22:03:47.223379628 +0000
	I0528 15:03:47.319248   12872 command_runner.go:130] > Change: 2024-05-28 22:03:47.227379638 +0000
	I0528 15:03:47.319248   12872 command_runner.go:130] >  Birth: -
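
"Will wait 60s for socket path /var/run/cri-dockerd.sock" above is a simple stat-and-retry loop: keep polling until the socket appears or the deadline passes. A self-contained Go sketch of that pattern, using a throwaway temp file so it runs anywhere; names are illustrative:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForPath polls os.Stat until path exists or the timeout elapses.
func waitForPath(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	// On the VM the real path is /var/run/cri-dockerd.sock; a temp file is
	// used here only so the sketch succeeds on any machine.
	f, err := os.CreateTemp("", "sock-standin")
	if err != nil {
		panic(err)
	}
	defer os.Remove(f.Name())
	f.Close()
	if err := waitForPath(f.Name(), 5*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("path is present:", f.Name())
}
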
	I0528 15:03:47.319309   12872 start.go:562] Will wait 60s for crictl version
	I0528 15:03:47.332347   12872 ssh_runner.go:195] Run: which crictl
	I0528 15:03:47.338430   12872 command_runner.go:130] > /usr/bin/crictl
	I0528 15:03:47.352580   12872 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0528 15:03:47.412866   12872 command_runner.go:130] > Version:  0.1.0
	I0528 15:03:47.412866   12872 command_runner.go:130] > RuntimeName:  docker
	I0528 15:03:47.412866   12872 command_runner.go:130] > RuntimeVersion:  26.0.2
	I0528 15:03:47.412866   12872 command_runner.go:130] > RuntimeApiVersion:  v1
	I0528 15:03:47.413015   12872 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0528 15:03:47.423416   12872 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0528 15:03:47.454552   12872 command_runner.go:130] > 26.0.2
	I0528 15:03:47.466389   12872 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0528 15:03:47.497937   12872 command_runner.go:130] > 26.0.2
	I0528 15:03:47.502962   12872 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.0.2 ...
	I0528 15:03:47.503220   12872 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0528 15:03:47.507629   12872 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0528 15:03:47.507629   12872 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0528 15:03:47.507629   12872 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0528 15:03:47.507629   12872 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:91:32:cc Flags:up|broadcast|multicast|running}
	I0528 15:03:47.510959   12872 ip.go:210] interface addr: fe80::6a65:d82d:f435:d71f/64
	I0528 15:03:47.510959   12872 ip.go:210] interface addr: 172.18.208.1/20
	I0528 15:03:47.522517   12872 ssh_runner.go:195] Run: grep 172.18.208.1	host.minikube.internal$ /etc/hosts
	I0528 15:03:47.530922   12872 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.18.208.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
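
The bash one-liner above makes the host.minikube.internal mapping idempotent: strip any existing entry from /etc/hosts and append the current gateway IP (172.18.208.1). A short Go sketch of the same upsert, operating on a string instead of the real file; the helper name is hypothetical:

package main

import (
	"fmt"
	"strings"
)

// upsertHostsEntry drops any stale "<ip>\t<name>" line and appends the new
// mapping, mirroring the grep -v / echo pipeline in the log.
func upsertHostsEntry(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // same as `grep -v $'\t<name>$'`
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	before := "127.0.0.1\tlocalhost\n172.18.200.1\thost.minikube.internal\n"
	fmt.Print(upsertHostsEntry(before, "172.18.208.1", "host.minikube.internal"))
}
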
	I0528 15:03:47.559099   12872 kubeadm.go:877] updating cluster {Name:multinode-517300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.30.1 ClusterName:multinode-517300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.18.214.123 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptio
ns:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0528 15:03:47.559329   12872 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0528 15:03:47.568542   12872 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0528 15:03:47.589306   12872 docker.go:685] Got preloaded images: 
	I0528 15:03:47.589391   12872 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.1 wasn't preloaded
	I0528 15:03:47.600400   12872 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0528 15:03:47.616595   12872 command_runner.go:139] > {"Repositories":{}}
	I0528 15:03:47.629835   12872 ssh_runner.go:195] Run: which lz4
	I0528 15:03:47.635684   12872 command_runner.go:130] > /usr/bin/lz4
	I0528 15:03:47.635684   12872 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0528 15:03:47.646790   12872 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0528 15:03:47.653139   12872 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0528 15:03:47.653529   12872 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0528 15:03:47.655625   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359564351 bytes)
	I0528 15:03:49.762829   12872 docker.go:649] duration metric: took 2.1261632s to copy over tarball
	I0528 15:03:49.776772   12872 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0528 15:03:58.411387   12872 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.6345918s)
	I0528 15:03:58.411943   12872 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0528 15:03:58.485187   12872 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0528 15:03:58.502763   12872 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.11.1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.12-0":"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b":"sha256:3861cfcd7c04ccac1f062788eca
39487248527ef0c0cfd477a83d7691a75a899"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.30.1":"sha256:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a","registry.k8s.io/kube-apiserver@sha256:0d4a3051234387b78affbcde283dcde5df21e0d6d740c80c363db1cbb973b4ea":"sha256:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.30.1":"sha256:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c","registry.k8s.io/kube-controller-manager@sha256:0c34190fbf807746f6584104811ed5cda72fb30ce30a036c132dea692d55ec52":"sha256:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.30.1":"sha256:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd","registry.k8s.io/kube-proxy@sha256:a1754e5a33878878e78dd0141167e7c529d91eb9b36ffbbf91a6052257b3179c":"sha256:747097150317f99937cabea484cff90097a2dbd79e7eb348b
71dc0af879883cd"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.30.1":"sha256:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035","registry.k8s.io/kube-scheduler@sha256:74d02f6debc5ff3d3bc03f96ae029fb9c72ec1ea94c14e2cdf279939d8e0e036":"sha256:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
	I0528 15:03:58.502918   12872 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0528 15:03:58.545889   12872 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 15:03:58.766067   12872 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0528 15:04:01.730237   12872 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.964068s)
	I0528 15:04:01.741474   12872 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0528 15:04:01.764441   12872 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.1
	I0528 15:04:01.764441   12872 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.1
	I0528 15:04:01.765249   12872 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.1
	I0528 15:04:01.765249   12872 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.1
	I0528 15:04:01.765249   12872 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0528 15:04:01.765249   12872 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0528 15:04:01.765249   12872 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0528 15:04:01.765249   12872 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0528 15:04:01.765364   12872 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0528 15:04:01.765412   12872 cache_images.go:84] Images are preloaded, skipping loading
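
The preload flow above is: list the images already in the Docker store, notice that the pinned kube-apiserver image is missing ("wasn't preloaded"), scp the ~360 MB lz4 tarball to /preloaded.tar.lz4, extract it with tar -I lz4 into /var, restore repositories.json, restart docker, and list images again. A small Go sketch of just the "is the preload already there?" decision, with the image names taken from the log; the helper is illustrative:

package main

import (
	"fmt"
	"strings"
)

// hasPreloadedImage checks `docker images --format {{.Repository}}:{{.Tag}}`
// output for the required image, the same test behind the "wasn't preloaded"
// message above.
func hasPreloadedImage(dockerImagesOutput, required string) bool {
	for _, img := range strings.Split(dockerImagesOutput, "\n") {
		if strings.TrimSpace(img) == required {
			return true
		}
	}
	return false
}

func main() {
	const required = "registry.k8s.io/kube-apiserver:v1.30.1"
	// First pass in the log: empty image list, so the tarball is copied and
	// unpacked with `tar --xattrs -I lz4 -C /var -xf /preloaded.tar.lz4`.
	fmt.Println(hasPreloadedImage("", required))
	// Second pass after the docker restart: the image is present, so loading
	// is skipped.
	fmt.Println(hasPreloadedImage("registry.k8s.io/kube-apiserver:v1.30.1\nregistry.k8s.io/pause:3.9", required))
}
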
	I0528 15:04:01.765487   12872 kubeadm.go:928] updating node { 172.18.214.123 8443 v1.30.1 docker true true} ...
	I0528 15:04:01.765676   12872 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-517300 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.18.214.123
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:multinode-517300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
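
The kubelet systemd drop-in above is generated from the node settings (version, node name, node IP). A minimal text/template rendering of that drop-in, trimmed to the flags visible in the log; the template text and field names are illustrative, not minikube's template:

package main

import (
	"os"
	"text/template"
)

const kubeletTmpl = `[Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(kubeletTmpl))
	err := t.Execute(os.Stdout, struct {
		KubernetesVersion, NodeName, NodeIP string
	}{"v1.30.1", "multinode-517300", "172.18.214.123"})
	if err != nil {
		panic(err)
	}
}
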
	I0528 15:04:01.774899   12872 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0528 15:04:01.806293   12872 command_runner.go:130] > cgroupfs
	I0528 15:04:01.807419   12872 cni.go:84] Creating CNI manager for ""
	I0528 15:04:01.807419   12872 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0528 15:04:01.807419   12872 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0528 15:04:01.807419   12872 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.18.214.123 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-517300 NodeName:multinode-517300 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.18.214.123"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.18.214.123 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0528 15:04:01.807419   12872 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.18.214.123
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-517300"
	  kubeletExtraArgs:
	    node-ip: 172.18.214.123
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.18.214.123"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0528 15:04:01.821854   12872 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0528 15:04:01.837746   12872 command_runner.go:130] > kubeadm
	I0528 15:04:01.837746   12872 command_runner.go:130] > kubectl
	I0528 15:04:01.837746   12872 command_runner.go:130] > kubelet
	I0528 15:04:01.837746   12872 binaries.go:44] Found k8s binaries, skipping transfer
	I0528 15:04:01.850702   12872 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0528 15:04:01.867192   12872 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0528 15:04:01.899587   12872 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0528 15:04:01.928198   12872 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2164 bytes)
	I0528 15:04:01.970298   12872 ssh_runner.go:195] Run: grep 172.18.214.123	control-plane.minikube.internal$ /etc/hosts
	I0528 15:04:01.976196   12872 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.18.214.123	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0528 15:04:02.011261   12872 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 15:04:02.212607   12872 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0528 15:04:02.240156   12872 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-517300 for IP: 172.18.214.123
	I0528 15:04:02.240156   12872 certs.go:194] generating shared ca certs ...
	I0528 15:04:02.240258   12872 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 15:04:02.241055   12872 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0528 15:04:02.241552   12872 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0528 15:04:02.241672   12872 certs.go:256] generating profile certs ...
	I0528 15:04:02.241815   12872 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-517300\client.key
	I0528 15:04:02.242483   12872 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-517300\client.crt with IP's: []
	I0528 15:04:02.600754   12872 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-517300\client.crt ...
	I0528 15:04:02.600754   12872 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-517300\client.crt: {Name:mkbb53d8252220e2d70ed8704759f7dcc564c9f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 15:04:02.602507   12872 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-517300\client.key ...
	I0528 15:04:02.602507   12872 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-517300\client.key: {Name:mk623be51a596919b7068658166646f999a91f23 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 15:04:02.603436   12872 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-517300\apiserver.key.9e670251
	I0528 15:04:02.604508   12872 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-517300\apiserver.crt.9e670251 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.18.214.123]
	I0528 15:04:02.821602   12872 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-517300\apiserver.crt.9e670251 ...
	I0528 15:04:02.821602   12872 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-517300\apiserver.crt.9e670251: {Name:mk4d0bcbed55acbbcf847f135dcea8722f11160b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 15:04:02.823624   12872 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-517300\apiserver.key.9e670251 ...
	I0528 15:04:02.823624   12872 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-517300\apiserver.key.9e670251: {Name:mk418f21a97341483e4a062a5b212e05c46c0d6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 15:04:02.824146   12872 certs.go:381] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-517300\apiserver.crt.9e670251 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-517300\apiserver.crt
	I0528 15:04:02.836374   12872 certs.go:385] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-517300\apiserver.key.9e670251 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-517300\apiserver.key
	I0528 15:04:02.837414   12872 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-517300\proxy-client.key
	I0528 15:04:02.838478   12872 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-517300\proxy-client.crt with IP's: []
	I0528 15:04:03.095755   12872 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-517300\proxy-client.crt ...
	I0528 15:04:03.095755   12872 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-517300\proxy-client.crt: {Name:mk6e138f76ba17d73f3a4d7eac6396bd96a4bb4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 15:04:03.097567   12872 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-517300\proxy-client.key ...
	I0528 15:04:03.097567   12872 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-517300\proxy-client.key: {Name:mk01d5abf6eb88271e2c116092903934f4c0c55a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 15:04:03.098008   12872 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0528 15:04:03.098881   12872 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0528 15:04:03.098881   12872 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0528 15:04:03.098881   12872 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0528 15:04:03.099504   12872 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-517300\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0528 15:04:03.099661   12872 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-517300\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0528 15:04:03.099867   12872 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-517300\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0528 15:04:03.110526   12872 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-517300\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
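
The cert block above generates the profile certificates, including an apiserver cert whose IP SANs are 10.96.0.1, 127.0.0.1, 10.0.0.1 and the node IP 172.18.214.123. A minimal Go sketch of issuing a certificate with those SANs; it is self-signed here for brevity, whereas the real apiserver cert is signed by the minikubeCA key:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// IP SANs taken from the log above.
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"),
			net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"),
			net.ParseIP("172.18.214.123"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}))
}
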
	I0528 15:04:03.110701   12872 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\1780.pem (1338 bytes)
	W0528 15:04:03.111347   12872 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\1780_empty.pem, impossibly tiny 0 bytes
	I0528 15:04:03.111537   12872 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0528 15:04:03.111537   12872 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0528 15:04:03.112097   12872 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0528 15:04:03.112097   12872 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0528 15:04:03.112891   12872 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\17802.pem (1708 bytes)
	I0528 15:04:03.112891   12872 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0528 15:04:03.112891   12872 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\1780.pem -> /usr/share/ca-certificates/1780.pem
	I0528 15:04:03.113417   12872 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\17802.pem -> /usr/share/ca-certificates/17802.pem
	I0528 15:04:03.114654   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0528 15:04:03.156710   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0528 15:04:03.191318   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0528 15:04:03.235569   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0528 15:04:03.277484   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-517300\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0528 15:04:03.321247   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-517300\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0528 15:04:03.366290   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-517300\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0528 15:04:03.411375   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-517300\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0528 15:04:03.452677   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0528 15:04:03.496644   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\1780.pem --> /usr/share/ca-certificates/1780.pem (1338 bytes)
	I0528 15:04:03.538829   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\17802.pem --> /usr/share/ca-certificates/17802.pem (1708 bytes)
	I0528 15:04:03.581199   12872 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0528 15:04:03.624930   12872 ssh_runner.go:195] Run: openssl version
	I0528 15:04:03.633883   12872 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0528 15:04:03.646725   12872 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0528 15:04:03.677197   12872 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0528 15:04:03.684849   12872 command_runner.go:130] > -rw-r--r-- 1 root root 1111 May 28 20:25 /usr/share/ca-certificates/minikubeCA.pem
	I0528 15:04:03.684934   12872 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 28 20:25 /usr/share/ca-certificates/minikubeCA.pem
	I0528 15:04:03.696947   12872 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0528 15:04:03.704949   12872 command_runner.go:130] > b5213941
	I0528 15:04:03.718474   12872 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0528 15:04:03.750686   12872 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1780.pem && ln -fs /usr/share/ca-certificates/1780.pem /etc/ssl/certs/1780.pem"
	I0528 15:04:03.781636   12872 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1780.pem
	I0528 15:04:03.788996   12872 command_runner.go:130] > -rw-r--r-- 1 root root 1338 May 28 20:41 /usr/share/ca-certificates/1780.pem
	I0528 15:04:03.788996   12872 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 28 20:41 /usr/share/ca-certificates/1780.pem
	I0528 15:04:03.799523   12872 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1780.pem
	I0528 15:04:03.809666   12872 command_runner.go:130] > 51391683
	I0528 15:04:03.821631   12872 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1780.pem /etc/ssl/certs/51391683.0"
	I0528 15:04:03.860842   12872 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17802.pem && ln -fs /usr/share/ca-certificates/17802.pem /etc/ssl/certs/17802.pem"
	I0528 15:04:03.891789   12872 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17802.pem
	I0528 15:04:03.897811   12872 command_runner.go:130] > -rw-r--r-- 1 root root 1708 May 28 20:41 /usr/share/ca-certificates/17802.pem
	I0528 15:04:03.897856   12872 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 28 20:41 /usr/share/ca-certificates/17802.pem
	I0528 15:04:03.912184   12872 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17802.pem
	I0528 15:04:03.920909   12872 command_runner.go:130] > 3ec20f2e
	I0528 15:04:03.933240   12872 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/17802.pem /etc/ssl/certs/3ec20f2e.0"
	I0528 15:04:03.963158   12872 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0528 15:04:03.968989   12872 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0528 15:04:03.969997   12872 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0528 15:04:03.970348   12872 kubeadm.go:391] StartCluster: {Name:multinode-517300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.3
0.1 ClusterName:multinode-517300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.18.214.123 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:
[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0528 15:04:03.979450   12872 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0528 15:04:04.010035   12872 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0528 15:04:04.026874   12872 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0528 15:04:04.026874   12872 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0528 15:04:04.026874   12872 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0528 15:04:04.038840   12872 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0528 15:04:04.068254   12872 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0528 15:04:04.085199   12872 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0528 15:04:04.085199   12872 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0528 15:04:04.085502   12872 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0528 15:04:04.085502   12872 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0528 15:04:04.085843   12872 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0528 15:04:04.085843   12872 kubeadm.go:156] found existing configuration files:
	
	I0528 15:04:04.097720   12872 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0528 15:04:04.115745   12872 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0528 15:04:04.115813   12872 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0528 15:04:04.127976   12872 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0528 15:04:04.157815   12872 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0528 15:04:04.173557   12872 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0528 15:04:04.174155   12872 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0528 15:04:04.187020   12872 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0528 15:04:04.217524   12872 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0528 15:04:04.236664   12872 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0528 15:04:04.236664   12872 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0528 15:04:04.248591   12872 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0528 15:04:04.275708   12872 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0528 15:04:04.290608   12872 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0528 15:04:04.291752   12872 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0528 15:04:04.303911   12872 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
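The lines above are minikube's stale-config pass: each kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint (https://control-plane.minikube.internal:8443) and removed if it does not reference it, before kubeadm init is re-run. Since none of the files exist on this fresh node, every grep exits with status 2 and each rm -f is a no-op. A minimal local sketch of the same check in Go, using the paths and endpoint from the log (the real code runs these as shell commands over SSH via ssh_runner, not locally):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // Endpoint and config paths are taken from the log above; running this
    // against a real node would require root and is only an illustration.
    const endpoint = "https://control-plane.minikube.internal:8443"

    var confs = []string{
    	"/etc/kubernetes/admin.conf",
    	"/etc/kubernetes/kubelet.conf",
    	"/etc/kubernetes/controller-manager.conf",
    	"/etc/kubernetes/scheduler.conf",
    }

    func main() {
    	for _, path := range confs {
    		data, err := os.ReadFile(path)
    		if err != nil {
    			// Missing file: nothing to clean up (the
    			// "No such file or directory" case in the log).
    			fmt.Printf("%s: %v\n", path, err)
    			continue
    		}
    		if !strings.Contains(string(data), endpoint) {
    			// Stale config from a previous cluster: remove it so
    			// kubeadm init can regenerate it.
    			if err := os.Remove(path); err != nil {
    				fmt.Printf("remove %s: %v\n", path, err)
    			}
    		}
    	}
    }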
	I0528 15:04:04.322666   12872 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0528 15:04:04.722403   12872 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0528 15:04:04.722499   12872 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0528 15:04:18.645196   12872 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0528 15:04:18.645282   12872 command_runner.go:130] > [init] Using Kubernetes version: v1.30.1
	I0528 15:04:18.645282   12872 kubeadm.go:309] [preflight] Running pre-flight checks
	I0528 15:04:18.645282   12872 command_runner.go:130] > [preflight] Running pre-flight checks
	I0528 15:04:18.645282   12872 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0528 15:04:18.645282   12872 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0528 15:04:18.645816   12872 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0528 15:04:18.645816   12872 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0528 15:04:18.645910   12872 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0528 15:04:18.645910   12872 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0528 15:04:18.645910   12872 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0528 15:04:18.645910   12872 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0528 15:04:18.650070   12872 out.go:204]   - Generating certificates and keys ...
	I0528 15:04:18.650367   12872 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0528 15:04:18.650422   12872 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0528 15:04:18.650516   12872 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0528 15:04:18.650516   12872 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0528 15:04:18.650686   12872 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0528 15:04:18.650786   12872 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0528 15:04:18.650871   12872 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0528 15:04:18.650963   12872 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0528 15:04:18.651162   12872 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0528 15:04:18.651162   12872 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0528 15:04:18.651319   12872 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0528 15:04:18.651319   12872 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0528 15:04:18.651492   12872 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0528 15:04:18.651492   12872 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0528 15:04:18.651597   12872 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-517300] and IPs [172.18.214.123 127.0.0.1 ::1]
	I0528 15:04:18.651597   12872 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-517300] and IPs [172.18.214.123 127.0.0.1 ::1]
	I0528 15:04:18.651597   12872 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0528 15:04:18.651597   12872 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0528 15:04:18.652184   12872 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-517300] and IPs [172.18.214.123 127.0.0.1 ::1]
	I0528 15:04:18.652184   12872 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-517300] and IPs [172.18.214.123 127.0.0.1 ::1]
	I0528 15:04:18.652342   12872 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0528 15:04:18.652434   12872 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0528 15:04:18.652520   12872 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0528 15:04:18.652520   12872 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0528 15:04:18.652636   12872 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0528 15:04:18.652755   12872 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0528 15:04:18.652917   12872 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0528 15:04:18.652917   12872 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0528 15:04:18.653005   12872 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0528 15:04:18.653083   12872 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0528 15:04:18.653125   12872 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0528 15:04:18.653125   12872 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0528 15:04:18.653125   12872 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0528 15:04:18.653125   12872 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0528 15:04:18.653125   12872 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0528 15:04:18.653125   12872 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0528 15:04:18.653693   12872 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0528 15:04:18.653693   12872 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0528 15:04:18.653876   12872 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0528 15:04:18.653876   12872 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0528 15:04:18.654121   12872 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0528 15:04:18.654121   12872 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0528 15:04:18.656668   12872 out.go:204]   - Booting up control plane ...
	I0528 15:04:18.656890   12872 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0528 15:04:18.656943   12872 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0528 15:04:18.657053   12872 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0528 15:04:18.657053   12872 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0528 15:04:18.657053   12872 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0528 15:04:18.657053   12872 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0528 15:04:18.657608   12872 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0528 15:04:18.657664   12872 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0528 15:04:18.657844   12872 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0528 15:04:18.657895   12872 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0528 15:04:18.658070   12872 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0528 15:04:18.658116   12872 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0528 15:04:18.658417   12872 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0528 15:04:18.658466   12872 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0528 15:04:18.658616   12872 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0528 15:04:18.658616   12872 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0528 15:04:18.658616   12872 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 502.578635ms
	I0528 15:04:18.658616   12872 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 502.578635ms
	I0528 15:04:18.658616   12872 command_runner.go:130] > [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0528 15:04:18.658616   12872 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0528 15:04:18.659220   12872 command_runner.go:130] > [api-check] The API server is healthy after 7.001909078s
	I0528 15:04:18.659220   12872 kubeadm.go:309] [api-check] The API server is healthy after 7.001909078s
	I0528 15:04:18.659412   12872 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0528 15:04:18.659412   12872 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0528 15:04:18.659667   12872 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0528 15:04:18.659765   12872 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0528 15:04:18.659917   12872 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0528 15:04:18.659917   12872 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0528 15:04:18.660516   12872 command_runner.go:130] > [mark-control-plane] Marking the node multinode-517300 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0528 15:04:18.660516   12872 kubeadm.go:309] [mark-control-plane] Marking the node multinode-517300 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0528 15:04:18.660607   12872 command_runner.go:130] > [bootstrap-token] Using token: qgoe1u.bafy08ssdc4my17q
	I0528 15:04:18.660607   12872 kubeadm.go:309] [bootstrap-token] Using token: qgoe1u.bafy08ssdc4my17q
	I0528 15:04:18.663030   12872 out.go:204]   - Configuring RBAC rules ...
	I0528 15:04:18.663575   12872 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0528 15:04:18.663640   12872 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0528 15:04:18.663701   12872 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0528 15:04:18.663701   12872 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0528 15:04:18.663701   12872 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0528 15:04:18.663701   12872 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0528 15:04:18.664449   12872 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0528 15:04:18.664449   12872 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0528 15:04:18.664449   12872 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0528 15:04:18.664449   12872 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0528 15:04:18.664449   12872 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0528 15:04:18.664449   12872 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0528 15:04:18.664449   12872 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0528 15:04:18.664449   12872 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0528 15:04:18.665326   12872 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0528 15:04:18.665326   12872 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0528 15:04:18.665326   12872 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0528 15:04:18.665326   12872 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0528 15:04:18.665326   12872 kubeadm.go:309] 
	I0528 15:04:18.665326   12872 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0528 15:04:18.665326   12872 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0528 15:04:18.665326   12872 kubeadm.go:309] 
	I0528 15:04:18.665326   12872 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0528 15:04:18.665326   12872 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0528 15:04:18.665326   12872 kubeadm.go:309] 
	I0528 15:04:18.665326   12872 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0528 15:04:18.665326   12872 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0528 15:04:18.665326   12872 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0528 15:04:18.665326   12872 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0528 15:04:18.666328   12872 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0528 15:04:18.666328   12872 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0528 15:04:18.666328   12872 kubeadm.go:309] 
	I0528 15:04:18.666328   12872 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0528 15:04:18.666328   12872 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0528 15:04:18.666328   12872 kubeadm.go:309] 
	I0528 15:04:18.666328   12872 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0528 15:04:18.666328   12872 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0528 15:04:18.666328   12872 kubeadm.go:309] 
	I0528 15:04:18.666328   12872 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0528 15:04:18.666328   12872 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0528 15:04:18.666861   12872 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0528 15:04:18.666927   12872 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0528 15:04:18.667055   12872 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0528 15:04:18.667055   12872 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0528 15:04:18.667055   12872 kubeadm.go:309] 
	I0528 15:04:18.667055   12872 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0528 15:04:18.667055   12872 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0528 15:04:18.667055   12872 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0528 15:04:18.667055   12872 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0528 15:04:18.667055   12872 kubeadm.go:309] 
	I0528 15:04:18.667055   12872 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token qgoe1u.bafy08ssdc4my17q \
	I0528 15:04:18.667055   12872 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token qgoe1u.bafy08ssdc4my17q \
	I0528 15:04:18.667959   12872 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:539ff17077d9a18fe6ace348cbfbe22affc6c6390f84ec7c9294c19b6ec125ea \
	I0528 15:04:18.667959   12872 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:539ff17077d9a18fe6ace348cbfbe22affc6c6390f84ec7c9294c19b6ec125ea \
	I0528 15:04:18.667959   12872 kubeadm.go:309] 	--control-plane 
	I0528 15:04:18.667959   12872 command_runner.go:130] > 	--control-plane 
	I0528 15:04:18.667959   12872 kubeadm.go:309] 
	I0528 15:04:18.667959   12872 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0528 15:04:18.667959   12872 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0528 15:04:18.667959   12872 kubeadm.go:309] 
	I0528 15:04:18.667959   12872 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token qgoe1u.bafy08ssdc4my17q \
	I0528 15:04:18.667959   12872 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token qgoe1u.bafy08ssdc4my17q \
	I0528 15:04:18.667959   12872 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:539ff17077d9a18fe6ace348cbfbe22affc6c6390f84ec7c9294c19b6ec125ea 
	I0528 15:04:18.668938   12872 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:539ff17077d9a18fe6ace348cbfbe22affc6c6390f84ec7c9294c19b6ec125ea 
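The join commands printed above carry a --discovery-token-ca-cert-hash, which is the SHA-256 of the cluster CA certificate's DER-encoded Subject Public Key Info. A small sketch of how that value can be recomputed from the CA certificate on the node; the certificate directory /var/lib/minikube/certs comes from the [certs] line earlier in this log, and the ca.crt file name is the conventional one (an assumption, not shown in the log):

    package main

    import (
    	"crypto/sha256"
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    )

    func main() {
    	// certificateDir from the kubeadm output above; file name assumed.
    	data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		panic("no PEM block found in ca.crt")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	// The discovery hash is sha256 over the DER SubjectPublicKeyInfo.
    	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
    	fmt.Printf("sha256:%x\n", sum)
    }

Joining nodes compare this hash against the serving CA to pin the control plane they are about to trust, which is why the same value appears in both the control-plane and worker join commands above.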
	I0528 15:04:18.668938   12872 cni.go:84] Creating CNI manager for ""
	I0528 15:04:18.668938   12872 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0528 15:04:18.673193   12872 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0528 15:04:18.687445   12872 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0528 15:04:18.696361   12872 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0528 15:04:18.696361   12872 command_runner.go:130] >   Size: 2781656   	Blocks: 5440       IO Block: 4096   regular file
	I0528 15:04:18.696361   12872 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0528 15:04:18.696361   12872 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0528 15:04:18.696912   12872 command_runner.go:130] > Access: 2024-05-28 22:02:21.838894400 +0000
	I0528 15:04:18.696912   12872 command_runner.go:130] > Modify: 2024-05-22 23:10:00.000000000 +0000
	I0528 15:04:18.696912   12872 command_runner.go:130] > Change: 2024-05-28 15:02:12.833000000 +0000
	I0528 15:04:18.696912   12872 command_runner.go:130] >  Birth: -
	I0528 15:04:18.697121   12872 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.1/kubectl ...
	I0528 15:04:18.697252   12872 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0528 15:04:18.750012   12872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0528 15:04:19.463405   12872 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0528 15:04:19.463895   12872 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0528 15:04:19.463971   12872 command_runner.go:130] > serviceaccount/kindnet created
	I0528 15:04:19.463971   12872 command_runner.go:130] > daemonset.apps/kindnet created
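After the CNI manifest is copied to /var/tmp/minikube/cni.yaml (the scp line above), minikube applies it with the bundled kubectl, which produces the kindnet ClusterRole, ClusterRoleBinding, ServiceAccount and DaemonSet just listed. A hedged sketch of that apply step driven from Go, mirroring the command in the log; paths are the ones printed there, and on a real run the command executes on the node rather than the host:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Binary, kubeconfig and manifest paths are the ones in the log.
    	cmd := exec.Command("sudo",
    		"/var/lib/minikube/binaries/v1.30.1/kubectl", "apply",
    		"--kubeconfig=/var/lib/minikube/kubeconfig",
    		"-f", "/var/tmp/minikube/cni.yaml")
    	out, err := cmd.CombinedOutput()
    	fmt.Print(string(out)) // e.g. "daemonset.apps/kindnet created"
    	if err != nil {
    		panic(err)
    	}
    }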
	I0528 15:04:19.464037   12872 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0528 15:04:19.478868   12872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-517300 minikube.k8s.io/updated_at=2024_05_28T15_04_19_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=c95b4fdda455689199e2a93674568b261e34dc82 minikube.k8s.io/name=multinode-517300 minikube.k8s.io/primary=true
	I0528 15:04:19.479684   12872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 15:04:19.499462   12872 command_runner.go:130] > -16
	I0528 15:04:19.499684   12872 ops.go:34] apiserver oom_adj: -16
	I0528 15:04:19.643105   12872 command_runner.go:130] > node/multinode-517300 labeled
	I0528 15:04:19.649722   12872 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
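Three parallel steps complete in the lines above: the API server's OOM setting is read back as -16 (a strongly negative value that shields it from the kernel's OOM killer), the node is labeled with the minikube.k8s.io/* metadata, and the minikube-rbac ClusterRoleBinding grants cluster-admin to the kube-system:default service account. The OOM check in the log is a shell one-liner (cat /proc/$(pgrep kube-apiserver)/oom_adj); a rough standard-library equivalent, locating the process by its /proc comm entry, might look like this:

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    	"strings"
    )

    func main() {
    	entries, err := os.ReadDir("/proc")
    	if err != nil {
    		panic(err)
    	}
    	for _, e := range entries {
    		if !e.IsDir() {
    			continue
    		}
    		comm, err := os.ReadFile(filepath.Join("/proc", e.Name(), "comm"))
    		if err != nil || strings.TrimSpace(string(comm)) != "kube-apiserver" {
    			continue
    		}
    		// oom_adj is the legacy knob the log reads; -16 discourages
    		// the OOM killer from selecting the API server.
    		adj, err := os.ReadFile(filepath.Join("/proc", e.Name(), "oom_adj"))
    		if err != nil {
    			panic(err)
    		}
    		fmt.Printf("pid %s oom_adj %s", e.Name(), adj)
    		return
    	}
    	fmt.Println("kube-apiserver not found")
    }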
	I0528 15:04:19.663793   12872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 15:04:19.795952   12872 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0528 15:04:20.173893   12872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 15:04:20.303162   12872 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0528 15:04:20.664964   12872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 15:04:20.779699   12872 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0528 15:04:21.167419   12872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 15:04:21.274520   12872 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0528 15:04:21.675087   12872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 15:04:21.788051   12872 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0528 15:04:22.165553   12872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 15:04:22.270974   12872 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0528 15:04:22.665916   12872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 15:04:22.785408   12872 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0528 15:04:23.166377   12872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 15:04:23.289539   12872 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0528 15:04:23.667383   12872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 15:04:23.800709   12872 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0528 15:04:24.178071   12872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 15:04:24.292814   12872 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0528 15:04:24.665732   12872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 15:04:24.785338   12872 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0528 15:04:25.177468   12872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 15:04:25.292707   12872 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0528 15:04:25.663602   12872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 15:04:25.766731   12872 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0528 15:04:26.166733   12872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 15:04:26.272651   12872 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0528 15:04:26.668480   12872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 15:04:26.774324   12872 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0528 15:04:27.169844   12872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 15:04:27.276479   12872 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0528 15:04:27.669869   12872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 15:04:27.781227   12872 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0528 15:04:28.171783   12872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 15:04:28.288477   12872 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0528 15:04:28.671268   12872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 15:04:28.778116   12872 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0528 15:04:29.177395   12872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 15:04:29.276871   12872 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0528 15:04:29.665494   12872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 15:04:29.779572   12872 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0528 15:04:30.165761   12872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 15:04:30.267271   12872 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0528 15:04:30.675039   12872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 15:04:30.782490   12872 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0528 15:04:31.170363   12872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 15:04:31.276944   12872 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0528 15:04:31.676188   12872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 15:04:31.800743   12872 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0528 15:04:32.166967   12872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 15:04:32.396185   12872 command_runner.go:130] > NAME      SECRETS   AGE
	I0528 15:04:32.396288   12872 command_runner.go:130] > default   0         1s
	I0528 15:04:32.396288   12872 kubeadm.go:1107] duration metric: took 12.9321637s to wait for elevateKubeSystemPrivileges
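The long run of "serviceaccounts \"default\" not found" errors above is a poll: minikube re-runs kubectl get sa default about every 500ms (judging by the timestamps) until the controller manager's service account controller has created the default ServiceAccount, at which point the elevateKubeSystemPrivileges step is considered done. A minimal retry-loop sketch under the same assumptions; the two-minute deadline is an illustration, not a value taken from the log:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	kubectl := "/var/lib/minikube/binaries/v1.30.1/kubectl" // path from the log
    	deadline := time.Now().Add(2 * time.Minute)             // timeout assumed
    	for time.Now().Before(deadline) {
    		cmd := exec.Command("sudo", kubectl, "get", "sa", "default",
    			"--kubeconfig=/var/lib/minikube/kubeconfig")
    		if out, err := cmd.CombinedOutput(); err == nil {
    			fmt.Print(string(out)) // NAME / SECRETS / AGE table
    			return
    		}
    		// "Error from server (NotFound)" until the ServiceAccount
    		// controller has created the default ServiceAccount.
    		time.Sleep(500 * time.Millisecond)
    	}
    	panic("timed out waiting for the default ServiceAccount")
    }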
	W0528 15:04:32.396408   12872 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0528 15:04:32.396408   12872 kubeadm.go:393] duration metric: took 28.4259836s to StartCluster
	I0528 15:04:32.396527   12872 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 15:04:32.396527   12872 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0528 15:04:32.398854   12872 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 15:04:32.400361   12872 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0528 15:04:32.400361   12872 start.go:234] Will wait 6m0s for node &{Name: IP:172.18.214.123 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0528 15:04:32.404208   12872 out.go:177] * Verifying Kubernetes components...
	I0528 15:04:32.400985   12872 config.go:182] Loaded profile config "multinode-517300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0528 15:04:32.401351   12872 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0528 15:04:32.409302   12872 addons.go:69] Setting storage-provisioner=true in profile "multinode-517300"
	I0528 15:04:32.409302   12872 addons.go:234] Setting addon storage-provisioner=true in "multinode-517300"
	I0528 15:04:32.409302   12872 addons.go:69] Setting default-storageclass=true in profile "multinode-517300"
	I0528 15:04:32.409302   12872 host.go:66] Checking if "multinode-517300" exists ...
	I0528 15:04:32.409848   12872 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-517300"
	I0528 15:04:32.410615   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300 ).state
	I0528 15:04:32.410615   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300 ).state
	I0528 15:04:32.424572   12872 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 15:04:32.637600   12872 command_runner.go:130] > apiVersion: v1
	I0528 15:04:32.637600   12872 command_runner.go:130] > data:
	I0528 15:04:32.637600   12872 command_runner.go:130] >   Corefile: |
	I0528 15:04:32.637600   12872 command_runner.go:130] >     .:53 {
	I0528 15:04:32.637727   12872 command_runner.go:130] >         errors
	I0528 15:04:32.637727   12872 command_runner.go:130] >         health {
	I0528 15:04:32.637727   12872 command_runner.go:130] >            lameduck 5s
	I0528 15:04:32.637727   12872 command_runner.go:130] >         }
	I0528 15:04:32.637727   12872 command_runner.go:130] >         ready
	I0528 15:04:32.637727   12872 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0528 15:04:32.637786   12872 command_runner.go:130] >            pods insecure
	I0528 15:04:32.637786   12872 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0528 15:04:32.637786   12872 command_runner.go:130] >            ttl 30
	I0528 15:04:32.637786   12872 command_runner.go:130] >         }
	I0528 15:04:32.637786   12872 command_runner.go:130] >         prometheus :9153
	I0528 15:04:32.637786   12872 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0528 15:04:32.637843   12872 command_runner.go:130] >            max_concurrent 1000
	I0528 15:04:32.637843   12872 command_runner.go:130] >         }
	I0528 15:04:32.637843   12872 command_runner.go:130] >         cache 30
	I0528 15:04:32.637843   12872 command_runner.go:130] >         loop
	I0528 15:04:32.637843   12872 command_runner.go:130] >         reload
	I0528 15:04:32.637843   12872 command_runner.go:130] >         loadbalance
	I0528 15:04:32.637843   12872 command_runner.go:130] >     }
	I0528 15:04:32.637843   12872 command_runner.go:130] > kind: ConfigMap
	I0528 15:04:32.637843   12872 command_runner.go:130] > metadata:
	I0528 15:04:32.637843   12872 command_runner.go:130] >   creationTimestamp: "2024-05-28T22:04:17Z"
	I0528 15:04:32.637843   12872 command_runner.go:130] >   name: coredns
	I0528 15:04:32.637843   12872 command_runner.go:130] >   namespace: kube-system
	I0528 15:04:32.637843   12872 command_runner.go:130] >   resourceVersion: "258"
	I0528 15:04:32.637843   12872 command_runner.go:130] >   uid: 3ae241dd-4f86-4ae0-8851-46c3645b147f
	I0528 15:04:32.649122   12872 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.18.208.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0528 15:04:32.807930   12872 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0528 15:04:33.327538   12872 command_runner.go:130] > configmap/coredns replaced
	I0528 15:04:33.327538   12872 start.go:946] {"host.minikube.internal": 172.18.208.1} host record injected into CoreDNS's ConfigMap
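The sed pipeline above rewrites the CoreDNS Corefile so that host.minikube.internal resolves to the host gateway 172.18.208.1: it inserts a hosts block immediately before the forward plugin, adds a log directive before errors, and replaces the ConfigMap. A rough Go sketch of the hosts insertion only, operating on an abbreviated version of the Corefile dumped above (the real code pipes kubectl get/replace through sed on the node, as shown in the log):

    package main

    import (
    	"fmt"
    	"strings"
    )

    func injectHostRecord(corefile, hostIP string) string {
    	hosts := fmt.Sprintf(
    		"        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }",
    		hostIP)
    	var out []string
    	for _, line := range strings.Split(corefile, "\n") {
    		// Insert the hosts block immediately before the forward
    		// plugin, matching the sed expression in the log.
    		if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
    			out = append(out, hosts)
    		}
    		out = append(out, line)
    	}
    	return strings.Join(out, "\n")
    }

    func main() {
    	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf {\n           max_concurrent 1000\n        }\n        cache 30\n    }"
    	fmt.Println(injectHostRecord(corefile, "172.18.208.1")) // IP from the log
    }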
	I0528 15:04:33.328476   12872 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0528 15:04:33.330217   12872 kapi.go:59] client config for multinode-517300: &rest.Config{Host:"https://172.18.214.123:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-517300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-517300\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CA
Data:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1a6b5a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0528 15:04:33.332127   12872 cert_rotation.go:137] Starting client certificate rotation controller
	I0528 15:04:33.332127   12872 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0528 15:04:33.332841   12872 node_ready.go:35] waiting up to 6m0s for node "multinode-517300" to be "Ready" ...
	I0528 15:04:33.333190   12872 round_trippers.go:463] GET https://172.18.214.123:8443/api/v1/nodes/multinode-517300
	I0528 15:04:33.333190   12872 round_trippers.go:469] Request Headers:
	I0528 15:04:33.333265   12872 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:04:33.333347   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:04:33.333347   12872 kapi.go:59] client config for multinode-517300: &rest.Config{Host:"https://172.18.214.123:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-517300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-517300\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CA
Data:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1a6b5a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0528 15:04:33.334251   12872 round_trippers.go:463] GET https://172.18.214.123:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0528 15:04:33.334251   12872 round_trippers.go:469] Request Headers:
	I0528 15:04:33.334251   12872 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:04:33.334251   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:04:33.350923   12872 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0528 15:04:33.350987   12872 round_trippers.go:577] Response Headers:
	I0528 15:04:33.350987   12872 round_trippers.go:580]     Date: Tue, 28 May 2024 22:04:33 GMT
	I0528 15:04:33.350987   12872 round_trippers.go:580]     Audit-Id: 36bc9afc-f0b0-4be1-917e-70a0939a6215
	I0528 15:04:33.350987   12872 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0528 15:04:33.350987   12872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:04:33.351317   12872 round_trippers.go:580]     Content-Type: application/json
	I0528 15:04:33.351378   12872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:04:33.351378   12872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:04:33.351317   12872 round_trippers.go:577] Response Headers:
	I0528 15:04:33.351560   12872 round_trippers.go:580]     Content-Type: application/json
	I0528 15:04:33.351602   12872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:04:33.351602   12872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:04:33.351602   12872 round_trippers.go:580]     Content-Length: 291
	I0528 15:04:33.351602   12872 round_trippers.go:580]     Date: Tue, 28 May 2024 22:04:33 GMT
	I0528 15:04:33.351602   12872 round_trippers.go:580]     Audit-Id: 779a51ca-fef1-4d0e-b819-27f0f19d880e
	I0528 15:04:33.351602   12872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:04:33.351602   12872 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"339","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0528 15:04:33.351602   12872 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"56570527-55fc-4e41-ae5b-42c3458780f5","resourceVersion":"389","creationTimestamp":"2024-05-28T22:04:18Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0528 15:04:33.352507   12872 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"56570527-55fc-4e41-ae5b-42c3458780f5","resourceVersion":"389","creationTimestamp":"2024-05-28T22:04:18Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0528 15:04:33.352507   12872 round_trippers.go:463] PUT https://172.18.214.123:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0528 15:04:33.352507   12872 round_trippers.go:469] Request Headers:
	I0528 15:04:33.352507   12872 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:04:33.352507   12872 round_trippers.go:473]     Content-Type: application/json
	I0528 15:04:33.352507   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:04:33.366429   12872 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0528 15:04:33.366429   12872 round_trippers.go:577] Response Headers:
	I0528 15:04:33.366429   12872 round_trippers.go:580]     Content-Length: 291
	I0528 15:04:33.366429   12872 round_trippers.go:580]     Date: Tue, 28 May 2024 22:04:33 GMT
	I0528 15:04:33.366429   12872 round_trippers.go:580]     Audit-Id: 8a112918-ae59-470c-9f3b-d75458d98710
	I0528 15:04:33.366429   12872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:04:33.366429   12872 round_trippers.go:580]     Content-Type: application/json
	I0528 15:04:33.366429   12872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:04:33.366429   12872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:04:33.367748   12872 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"56570527-55fc-4e41-ae5b-42c3458780f5","resourceVersion":"395","creationTimestamp":"2024-05-28T22:04:18Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0528 15:04:33.837749   12872 round_trippers.go:463] GET https://172.18.214.123:8443/api/v1/nodes/multinode-517300
	I0528 15:04:33.837813   12872 round_trippers.go:469] Request Headers:
	I0528 15:04:33.837749   12872 round_trippers.go:463] GET https://172.18.214.123:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0528 15:04:33.837867   12872 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:04:33.837867   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:04:33.837867   12872 round_trippers.go:469] Request Headers:
	I0528 15:04:33.837942   12872 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:04:33.837942   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:04:33.841385   12872 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 15:04:33.842323   12872 round_trippers.go:577] Response Headers:
	I0528 15:04:33.842323   12872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:04:33.842323   12872 round_trippers.go:580]     Content-Length: 291
	I0528 15:04:33.842323   12872 round_trippers.go:580]     Date: Tue, 28 May 2024 22:04:33 GMT
	I0528 15:04:33.842323   12872 round_trippers.go:580]     Audit-Id: 6b146d07-2e73-4d17-8c40-468fdf1e1241
	I0528 15:04:33.842323   12872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:04:33.842323   12872 round_trippers.go:580]     Content-Type: application/json
	I0528 15:04:33.842323   12872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:04:33.842323   12872 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"56570527-55fc-4e41-ae5b-42c3458780f5","resourceVersion":"406","creationTimestamp":"2024-05-28T22:04:18Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0528 15:04:33.842445   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 15:04:33.842445   12872 round_trippers.go:577] Response Headers:
	I0528 15:04:33.842445   12872 round_trippers.go:580]     Audit-Id: 15fdaca0-0d79-4dc1-9e6d-37ac023b53d5
	I0528 15:04:33.842599   12872 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-517300" context rescaled to 1 replicas
	I0528 15:04:33.842599   12872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:04:33.842733   12872 round_trippers.go:580]     Content-Type: application/json
	I0528 15:04:33.842733   12872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:04:33.842733   12872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:04:33.842733   12872 round_trippers.go:580]     Date: Tue, 28 May 2024 22:04:33 GMT
	I0528 15:04:33.843089   12872 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"339","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0528 15:04:34.348597   12872 round_trippers.go:463] GET https://172.18.214.123:8443/api/v1/nodes/multinode-517300
	I0528 15:04:34.348597   12872 round_trippers.go:469] Request Headers:
	I0528 15:04:34.348597   12872 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:04:34.348597   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:04:34.352607   12872 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 15:04:34.352607   12872 round_trippers.go:577] Response Headers:
	I0528 15:04:34.352607   12872 round_trippers.go:580]     Audit-Id: 04847a74-d239-4f93-b08e-2a6f455a7538
	I0528 15:04:34.352607   12872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:04:34.352607   12872 round_trippers.go:580]     Content-Type: application/json
	I0528 15:04:34.352607   12872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:04:34.352607   12872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:04:34.352607   12872 round_trippers.go:580]     Date: Tue, 28 May 2024 22:04:34 GMT
	I0528 15:04:34.352607   12872 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"339","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0528 15:04:34.803707   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:04:34.803707   12872 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:04:34.807156   12872 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0528 15:04:34.810375   12872 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0528 15:04:34.810416   12872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0528 15:04:34.810493   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300 ).state
	I0528 15:04:34.842049   12872 round_trippers.go:463] GET https://172.18.214.123:8443/api/v1/nodes/multinode-517300
	I0528 15:04:34.842121   12872 round_trippers.go:469] Request Headers:
	I0528 15:04:34.842121   12872 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:04:34.842121   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:04:34.846730   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 15:04:34.846730   12872 round_trippers.go:577] Response Headers:
	I0528 15:04:34.846730   12872 round_trippers.go:580]     Date: Tue, 28 May 2024 22:04:34 GMT
	I0528 15:04:34.846730   12872 round_trippers.go:580]     Audit-Id: 3df1fc2d-dc18-4546-8cc5-941ed240ad8c
	I0528 15:04:34.846730   12872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:04:34.846730   12872 round_trippers.go:580]     Content-Type: application/json
	I0528 15:04:34.846730   12872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:04:34.846730   12872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:04:34.846730   12872 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"339","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0528 15:04:34.954813   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:04:34.954813   12872 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:04:34.955390   12872 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0528 15:04:34.956747   12872 kapi.go:59] client config for multinode-517300: &rest.Config{Host:"https://172.18.214.123:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-517300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-517300\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CA
Data:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1a6b5a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0528 15:04:34.957201   12872 addons.go:234] Setting addon default-storageclass=true in "multinode-517300"
	I0528 15:04:34.957741   12872 host.go:66] Checking if "multinode-517300" exists ...
	I0528 15:04:34.958535   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300 ).state
	I0528 15:04:35.346382   12872 round_trippers.go:463] GET https://172.18.214.123:8443/api/v1/nodes/multinode-517300
	I0528 15:04:35.346589   12872 round_trippers.go:469] Request Headers:
	I0528 15:04:35.346589   12872 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:04:35.346589   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:04:35.350122   12872 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 15:04:35.350122   12872 round_trippers.go:577] Response Headers:
	I0528 15:04:35.350122   12872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:04:35.350467   12872 round_trippers.go:580]     Date: Tue, 28 May 2024 22:04:35 GMT
	I0528 15:04:35.350467   12872 round_trippers.go:580]     Audit-Id: e3eaa198-f5e8-473f-b373-52ce659bc490
	I0528 15:04:35.350467   12872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:04:35.350467   12872 round_trippers.go:580]     Content-Type: application/json
	I0528 15:04:35.350467   12872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:04:35.351564   12872 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"339","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0528 15:04:35.351564   12872 node_ready.go:53] node "multinode-517300" has status "Ready":"False"
	I0528 15:04:35.834477   12872 round_trippers.go:463] GET https://172.18.214.123:8443/api/v1/nodes/multinode-517300
	I0528 15:04:35.834516   12872 round_trippers.go:469] Request Headers:
	I0528 15:04:35.834516   12872 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:04:35.834516   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:04:35.844377   12872 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0528 15:04:35.844377   12872 round_trippers.go:577] Response Headers:
	I0528 15:04:35.844377   12872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:04:35.844377   12872 round_trippers.go:580]     Date: Tue, 28 May 2024 22:04:35 GMT
	I0528 15:04:35.844377   12872 round_trippers.go:580]     Audit-Id: f37feaa1-37f7-4ba8-8daf-cc9ecf75a7b8
	I0528 15:04:35.844377   12872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:04:35.844377   12872 round_trippers.go:580]     Content-Type: application/json
	I0528 15:04:35.844377   12872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:04:35.844377   12872 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"339","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0528 15:04:36.341601   12872 round_trippers.go:463] GET https://172.18.214.123:8443/api/v1/nodes/multinode-517300
	I0528 15:04:36.341668   12872 round_trippers.go:469] Request Headers:
	I0528 15:04:36.341668   12872 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:04:36.341668   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:04:36.347321   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0528 15:04:36.347321   12872 round_trippers.go:577] Response Headers:
	I0528 15:04:36.347321   12872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:04:36.347321   12872 round_trippers.go:580]     Content-Type: application/json
	I0528 15:04:36.347321   12872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:04:36.347321   12872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:04:36.347321   12872 round_trippers.go:580]     Date: Tue, 28 May 2024 22:04:36 GMT
	I0528 15:04:36.347321   12872 round_trippers.go:580]     Audit-Id: a2e9d86a-d840-41bc-b9f8-97671b40a9e9
	I0528 15:04:36.347321   12872 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"339","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0528 15:04:36.848081   12872 round_trippers.go:463] GET https://172.18.214.123:8443/api/v1/nodes/multinode-517300
	I0528 15:04:36.848186   12872 round_trippers.go:469] Request Headers:
	I0528 15:04:36.848186   12872 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:04:36.848186   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:04:36.851790   12872 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 15:04:36.851790   12872 round_trippers.go:577] Response Headers:
	I0528 15:04:36.851790   12872 round_trippers.go:580]     Date: Tue, 28 May 2024 22:04:36 GMT
	I0528 15:04:36.852210   12872 round_trippers.go:580]     Audit-Id: 248abe77-5179-4f75-b9f0-def1728f5772
	I0528 15:04:36.852210   12872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:04:36.852210   12872 round_trippers.go:580]     Content-Type: application/json
	I0528 15:04:36.852210   12872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:04:36.852210   12872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:04:36.852620   12872 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"339","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0528 15:04:37.111609   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:04:37.111609   12872 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:04:37.111609   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300 ).networkadapters[0]).ipaddresses[0]
	I0528 15:04:37.256392   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:04:37.256485   12872 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:04:37.256485   12872 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0528 15:04:37.256485   12872 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0528 15:04:37.256485   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300 ).state
	I0528 15:04:37.336631   12872 round_trippers.go:463] GET https://172.18.214.123:8443/api/v1/nodes/multinode-517300
	I0528 15:04:37.336702   12872 round_trippers.go:469] Request Headers:
	I0528 15:04:37.336702   12872 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:04:37.336702   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:04:37.366570   12872 round_trippers.go:574] Response Status: 200 OK in 29 milliseconds
	I0528 15:04:37.366570   12872 round_trippers.go:577] Response Headers:
	I0528 15:04:37.366718   12872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:04:37.366718   12872 round_trippers.go:580]     Date: Tue, 28 May 2024 22:04:37 GMT
	I0528 15:04:37.366718   12872 round_trippers.go:580]     Audit-Id: 3123789b-87f8-4d18-9754-b8243fa1b80d
	I0528 15:04:37.366718   12872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:04:37.366718   12872 round_trippers.go:580]     Content-Type: application/json
	I0528 15:04:37.366718   12872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:04:37.367357   12872 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"339","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0528 15:04:37.367799   12872 node_ready.go:53] node "multinode-517300" has status "Ready":"False"
	I0528 15:04:37.841859   12872 round_trippers.go:463] GET https://172.18.214.123:8443/api/v1/nodes/multinode-517300
	I0528 15:04:37.841859   12872 round_trippers.go:469] Request Headers:
	I0528 15:04:37.841859   12872 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:04:37.841859   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:04:37.845887   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 15:04:37.846121   12872 round_trippers.go:577] Response Headers:
	I0528 15:04:37.846121   12872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:04:37.846203   12872 round_trippers.go:580]     Date: Tue, 28 May 2024 22:04:37 GMT
	I0528 15:04:37.846296   12872 round_trippers.go:580]     Audit-Id: f552eaa9-df5a-4b73-8664-a0375d3dab83
	I0528 15:04:37.846296   12872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:04:37.846381   12872 round_trippers.go:580]     Content-Type: application/json
	I0528 15:04:37.846461   12872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:04:37.846778   12872 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"339","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0528 15:04:38.347370   12872 round_trippers.go:463] GET https://172.18.214.123:8443/api/v1/nodes/multinode-517300
	I0528 15:04:38.347458   12872 round_trippers.go:469] Request Headers:
	I0528 15:04:38.347458   12872 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:04:38.347458   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:04:38.350444   12872 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 15:04:38.350444   12872 round_trippers.go:577] Response Headers:
	I0528 15:04:38.351397   12872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:04:38.351397   12872 round_trippers.go:580]     Date: Tue, 28 May 2024 22:04:38 GMT
	I0528 15:04:38.351397   12872 round_trippers.go:580]     Audit-Id: 307295bf-a1cf-46c4-a659-96d181117554
	I0528 15:04:38.351452   12872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:04:38.351452   12872 round_trippers.go:580]     Content-Type: application/json
	I0528 15:04:38.351452   12872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:04:38.352105   12872 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"339","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0528 15:04:38.837676   12872 round_trippers.go:463] GET https://172.18.214.123:8443/api/v1/nodes/multinode-517300
	I0528 15:04:38.837738   12872 round_trippers.go:469] Request Headers:
	I0528 15:04:38.837738   12872 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:04:38.837738   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:04:38.840776   12872 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 15:04:38.841733   12872 round_trippers.go:577] Response Headers:
	I0528 15:04:38.841733   12872 round_trippers.go:580]     Audit-Id: 2b7f84bd-a917-4b89-9b0b-83468317e921
	I0528 15:04:38.841733   12872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:04:38.841733   12872 round_trippers.go:580]     Content-Type: application/json
	I0528 15:04:38.841733   12872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:04:38.841733   12872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:04:38.841865   12872 round_trippers.go:580]     Date: Tue, 28 May 2024 22:04:38 GMT
	I0528 15:04:38.842583   12872 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"339","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0528 15:04:39.346025   12872 round_trippers.go:463] GET https://172.18.214.123:8443/api/v1/nodes/multinode-517300
	I0528 15:04:39.346025   12872 round_trippers.go:469] Request Headers:
	I0528 15:04:39.346025   12872 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:04:39.346025   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:04:39.349743   12872 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 15:04:39.349743   12872 round_trippers.go:577] Response Headers:
	I0528 15:04:39.349743   12872 round_trippers.go:580]     Date: Tue, 28 May 2024 22:04:39 GMT
	I0528 15:04:39.349743   12872 round_trippers.go:580]     Audit-Id: 6117cc15-9c6b-43b3-a353-9f8ed8ce46c9
	I0528 15:04:39.350499   12872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:04:39.350499   12872 round_trippers.go:580]     Content-Type: application/json
	I0528 15:04:39.350499   12872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:04:39.350499   12872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:04:39.350547   12872 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"339","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0528 15:04:39.591815   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:04:39.592344   12872 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:04:39.592423   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300 ).networkadapters[0]).ipaddresses[0]
	I0528 15:04:39.837337   12872 round_trippers.go:463] GET https://172.18.214.123:8443/api/v1/nodes/multinode-517300
	I0528 15:04:39.837337   12872 round_trippers.go:469] Request Headers:
	I0528 15:04:39.837449   12872 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:04:39.837449   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:04:39.840817   12872 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 15:04:39.840817   12872 round_trippers.go:577] Response Headers:
	I0528 15:04:39.840817   12872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:04:39.840817   12872 round_trippers.go:580]     Date: Tue, 28 May 2024 22:04:39 GMT
	I0528 15:04:39.840817   12872 round_trippers.go:580]     Audit-Id: 24091c0f-b87d-4a2a-9f14-6bf1fcf60af7
	I0528 15:04:39.840817   12872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:04:39.841560   12872 round_trippers.go:580]     Content-Type: application/json
	I0528 15:04:39.841560   12872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:04:39.841852   12872 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"339","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0528 15:04:39.842550   12872 node_ready.go:53] node "multinode-517300" has status "Ready":"False"
	I0528 15:04:39.856718   12872 main.go:141] libmachine: [stdout =====>] : 172.18.214.123
	
	I0528 15:04:39.856718   12872 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:04:39.857455   12872 sshutil.go:53] new ssh client: &{IP:172.18.214.123 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-517300\id_rsa Username:docker}
	I0528 15:04:39.999121   12872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0528 15:04:40.340094   12872 round_trippers.go:463] GET https://172.18.214.123:8443/api/v1/nodes/multinode-517300
	I0528 15:04:40.340094   12872 round_trippers.go:469] Request Headers:
	I0528 15:04:40.340094   12872 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:04:40.340309   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:04:40.343631   12872 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 15:04:40.343692   12872 round_trippers.go:577] Response Headers:
	I0528 15:04:40.343692   12872 round_trippers.go:580]     Content-Type: application/json
	I0528 15:04:40.343692   12872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:04:40.343692   12872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:04:40.343692   12872 round_trippers.go:580]     Date: Tue, 28 May 2024 22:04:40 GMT
	I0528 15:04:40.343692   12872 round_trippers.go:580]     Audit-Id: 5f9087b1-475b-4c64-9a2e-15c088dc2979
	I0528 15:04:40.343692   12872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:04:40.344138   12872 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"339","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0528 15:04:40.538999   12872 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0528 15:04:40.538999   12872 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0528 15:04:40.538999   12872 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0528 15:04:40.538999   12872 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0528 15:04:40.538999   12872 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0528 15:04:40.538999   12872 command_runner.go:130] > pod/storage-provisioner created
	I0528 15:04:40.847254   12872 round_trippers.go:463] GET https://172.18.214.123:8443/api/v1/nodes/multinode-517300
	I0528 15:04:40.847254   12872 round_trippers.go:469] Request Headers:
	I0528 15:04:40.847254   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:04:40.847254   12872 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:04:40.850850   12872 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 15:04:40.851083   12872 round_trippers.go:577] Response Headers:
	I0528 15:04:40.851083   12872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:04:40.851083   12872 round_trippers.go:580]     Content-Type: application/json
	I0528 15:04:40.851083   12872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:04:40.851083   12872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:04:40.851083   12872 round_trippers.go:580]     Date: Tue, 28 May 2024 22:04:40 GMT
	I0528 15:04:40.851083   12872 round_trippers.go:580]     Audit-Id: 390bc5fc-73e5-4fc8-9212-712d05bbb733
	I0528 15:04:40.852084   12872 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"339","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0528 15:04:41.338417   12872 round_trippers.go:463] GET https://172.18.214.123:8443/api/v1/nodes/multinode-517300
	I0528 15:04:41.338417   12872 round_trippers.go:469] Request Headers:
	I0528 15:04:41.338417   12872 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:04:41.338417   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:04:41.342017   12872 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 15:04:41.342017   12872 round_trippers.go:577] Response Headers:
	I0528 15:04:41.342895   12872 round_trippers.go:580]     Date: Tue, 28 May 2024 22:04:41 GMT
	I0528 15:04:41.342895   12872 round_trippers.go:580]     Audit-Id: d20f4d2e-2ef7-4145-9e46-eba66b22444f
	I0528 15:04:41.342895   12872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:04:41.342895   12872 round_trippers.go:580]     Content-Type: application/json
	I0528 15:04:41.342895   12872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:04:41.342895   12872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:04:41.343169   12872 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"339","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0528 15:04:41.847459   12872 round_trippers.go:463] GET https://172.18.214.123:8443/api/v1/nodes/multinode-517300
	I0528 15:04:41.847459   12872 round_trippers.go:469] Request Headers:
	I0528 15:04:41.847459   12872 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:04:41.847459   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:04:41.852035   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 15:04:41.852331   12872 round_trippers.go:577] Response Headers:
	I0528 15:04:41.852331   12872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:04:41.852331   12872 round_trippers.go:580]     Content-Type: application/json
	I0528 15:04:41.852331   12872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:04:41.852331   12872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:04:41.852331   12872 round_trippers.go:580]     Date: Tue, 28 May 2024 22:04:41 GMT
	I0528 15:04:41.852331   12872 round_trippers.go:580]     Audit-Id: 6277bcc5-a605-477f-a09e-a0433bdcf424
	I0528 15:04:41.852864   12872 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"339","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0528 15:04:41.853148   12872 node_ready.go:53] node "multinode-517300" has status "Ready":"False"
	I0528 15:04:42.108155   12872 main.go:141] libmachine: [stdout =====>] : 172.18.214.123
	
	I0528 15:04:42.109166   12872 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:04:42.109375   12872 sshutil.go:53] new ssh client: &{IP:172.18.214.123 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-517300\id_rsa Username:docker}
	I0528 15:04:42.254896   12872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0528 15:04:42.335111   12872 round_trippers.go:463] GET https://172.18.214.123:8443/api/v1/nodes/multinode-517300
	I0528 15:04:42.335111   12872 round_trippers.go:469] Request Headers:
	I0528 15:04:42.335111   12872 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:04:42.335111   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:04:42.337772   12872 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 15:04:42.338796   12872 round_trippers.go:577] Response Headers:
	I0528 15:04:42.338796   12872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:04:42.338796   12872 round_trippers.go:580]     Date: Tue, 28 May 2024 22:04:42 GMT
	I0528 15:04:42.338796   12872 round_trippers.go:580]     Audit-Id: 157373c2-09dd-4370-9800-bec98cfcf727
	I0528 15:04:42.338796   12872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:04:42.338796   12872 round_trippers.go:580]     Content-Type: application/json
	I0528 15:04:42.338796   12872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:04:42.338987   12872 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"339","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0528 15:04:42.443554   12872 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0528 15:04:42.447575   12872 round_trippers.go:463] GET https://172.18.214.123:8443/apis/storage.k8s.io/v1/storageclasses
	I0528 15:04:42.447575   12872 round_trippers.go:469] Request Headers:
	I0528 15:04:42.447575   12872 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:04:42.447575   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:04:42.458547   12872 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0528 15:04:42.459482   12872 round_trippers.go:577] Response Headers:
	I0528 15:04:42.459482   12872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:04:42.459482   12872 round_trippers.go:580]     Content-Type: application/json
	I0528 15:04:42.459482   12872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:04:42.459551   12872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:04:42.459551   12872 round_trippers.go:580]     Content-Length: 1273
	I0528 15:04:42.459636   12872 round_trippers.go:580]     Date: Tue, 28 May 2024 22:04:42 GMT
	I0528 15:04:42.459658   12872 round_trippers.go:580]     Audit-Id: 76395733-4d9c-4067-88af-20d3ab983f84
	I0528 15:04:42.459799   12872 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"432"},"items":[{"metadata":{"name":"standard","uid":"a34a4d00-c6d9-4cf2-9ecf-7f2e3c8d2af5","resourceVersion":"429","creationTimestamp":"2024-05-28T22:04:42Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-05-28T22:04:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0528 15:04:42.460519   12872 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"a34a4d00-c6d9-4cf2-9ecf-7f2e3c8d2af5","resourceVersion":"429","creationTimestamp":"2024-05-28T22:04:42Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-05-28T22:04:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0528 15:04:42.460671   12872 round_trippers.go:463] PUT https://172.18.214.123:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0528 15:04:42.460671   12872 round_trippers.go:469] Request Headers:
	I0528 15:04:42.460745   12872 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:04:42.460745   12872 round_trippers.go:473]     Content-Type: application/json
	I0528 15:04:42.460745   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:04:42.471865   12872 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0528 15:04:42.471865   12872 round_trippers.go:577] Response Headers:
	I0528 15:04:42.471865   12872 round_trippers.go:580]     Date: Tue, 28 May 2024 22:04:42 GMT
	I0528 15:04:42.472628   12872 round_trippers.go:580]     Audit-Id: bdf73db2-27dd-46c0-84a6-dbcaf7b5e923
	I0528 15:04:42.472628   12872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:04:42.472628   12872 round_trippers.go:580]     Content-Type: application/json
	I0528 15:04:42.472628   12872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:04:42.472673   12872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:04:42.472673   12872 round_trippers.go:580]     Content-Length: 1220
	I0528 15:04:42.472744   12872 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"a34a4d00-c6d9-4cf2-9ecf-7f2e3c8d2af5","resourceVersion":"429","creationTimestamp":"2024-05-28T22:04:42Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-05-28T22:04:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0528 15:04:42.478175   12872 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0528 15:04:42.481867   12872 addons.go:510] duration metric: took 10.0804885s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0528 15:04:42.838184   12872 round_trippers.go:463] GET https://172.18.214.123:8443/api/v1/nodes/multinode-517300
	I0528 15:04:42.838184   12872 round_trippers.go:469] Request Headers:
	I0528 15:04:42.838184   12872 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:04:42.838184   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:04:42.843101   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 15:04:42.843101   12872 round_trippers.go:577] Response Headers:
	I0528 15:04:42.843192   12872 round_trippers.go:580]     Audit-Id: 21aa60f0-d27b-42e9-9008-54e5db6dca23
	I0528 15:04:42.843192   12872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:04:42.843192   12872 round_trippers.go:580]     Content-Type: application/json
	I0528 15:04:42.843192   12872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:04:42.843192   12872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:04:42.843192   12872 round_trippers.go:580]     Date: Tue, 28 May 2024 22:04:42 GMT
	I0528 15:04:42.844282   12872 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"428","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0528 15:04:42.844770   12872 node_ready.go:49] node "multinode-517300" has status "Ready":"True"
	I0528 15:04:42.844826   12872 node_ready.go:38] duration metric: took 9.5119037s for node "multinode-517300" to be "Ready" ...
	I0528 15:04:42.844826   12872 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0528 15:04:42.845001   12872 round_trippers.go:463] GET https://172.18.214.123:8443/api/v1/namespaces/kube-system/pods
	I0528 15:04:42.845055   12872 round_trippers.go:469] Request Headers:
	I0528 15:04:42.845055   12872 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:04:42.845108   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:04:42.856953   12872 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0528 15:04:42.857089   12872 round_trippers.go:577] Response Headers:
	I0528 15:04:42.857089   12872 round_trippers.go:580]     Audit-Id: 9c94931f-c9a7-4604-b998-c8560828976e
	I0528 15:04:42.857089   12872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:04:42.857089   12872 round_trippers.go:580]     Content-Type: application/json
	I0528 15:04:42.857089   12872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:04:42.857089   12872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:04:42.857089   12872 round_trippers.go:580]     Date: Tue, 28 May 2024 22:04:42 GMT
	I0528 15:04:42.858786   12872 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"435"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-p9vb9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d1d5d377-99a6-445e-9597-446b3057e80f","resourceVersion":"434","creationTimestamp":"2024-05-28T22:04:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d11a1b0d-8687-41d7-b841-24979449141a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:04:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d11a1b0d-8687-41d7-b841-24979449141a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56385 chars]
	I0528 15:04:42.864153   12872 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-p9vb9" in "kube-system" namespace to be "Ready" ...
	I0528 15:04:42.864153   12872 round_trippers.go:463] GET https://172.18.214.123:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-p9vb9
	I0528 15:04:42.864153   12872 round_trippers.go:469] Request Headers:
	I0528 15:04:42.864153   12872 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:04:42.864153   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:04:42.866765   12872 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 15:04:42.866765   12872 round_trippers.go:577] Response Headers:
	I0528 15:04:42.867461   12872 round_trippers.go:580]     Audit-Id: a21e448f-1027-40df-ac2a-a28db4c14faa
	I0528 15:04:42.867461   12872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:04:42.867461   12872 round_trippers.go:580]     Content-Type: application/json
	I0528 15:04:42.867461   12872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:04:42.867461   12872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:04:42.867461   12872 round_trippers.go:580]     Date: Tue, 28 May 2024 22:04:42 GMT
	I0528 15:04:42.867745   12872 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-p9vb9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d1d5d377-99a6-445e-9597-446b3057e80f","resourceVersion":"434","creationTimestamp":"2024-05-28T22:04:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d11a1b0d-8687-41d7-b841-24979449141a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:04:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d11a1b0d-8687-41d7-b841-24979449141a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6449 chars]
	I0528 15:04:42.867851   12872 round_trippers.go:463] GET https://172.18.214.123:8443/api/v1/nodes/multinode-517300
	I0528 15:04:42.867851   12872 round_trippers.go:469] Request Headers:
	I0528 15:04:42.867851   12872 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:04:42.867851   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:04:42.870443   12872 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 15:04:42.870443   12872 round_trippers.go:577] Response Headers:
	I0528 15:04:42.870443   12872 round_trippers.go:580]     Audit-Id: 22d423e5-e2b4-4445-b3ff-9b065b42282f
	I0528 15:04:42.870443   12872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:04:42.870443   12872 round_trippers.go:580]     Content-Type: application/json
	I0528 15:04:42.870443   12872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:04:42.870443   12872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:04:42.870443   12872 round_trippers.go:580]     Date: Tue, 28 May 2024 22:04:42 GMT
	I0528 15:04:42.871583   12872 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"428","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0528 15:04:43.378363   12872 round_trippers.go:463] GET https://172.18.214.123:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-p9vb9
	I0528 15:04:43.378363   12872 round_trippers.go:469] Request Headers:
	I0528 15:04:43.378363   12872 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:04:43.378363   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:04:43.386379   12872 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0528 15:04:43.386379   12872 round_trippers.go:577] Response Headers:
	I0528 15:04:43.386379   12872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:04:43.386379   12872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:04:43.386379   12872 round_trippers.go:580]     Date: Tue, 28 May 2024 22:04:43 GMT
	I0528 15:04:43.386379   12872 round_trippers.go:580]     Audit-Id: c187bdf4-f81c-4369-babe-4dd987854a72
	I0528 15:04:43.386379   12872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:04:43.386379   12872 round_trippers.go:580]     Content-Type: application/json
	I0528 15:04:43.387456   12872 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-p9vb9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d1d5d377-99a6-445e-9597-446b3057e80f","resourceVersion":"434","creationTimestamp":"2024-05-28T22:04:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d11a1b0d-8687-41d7-b841-24979449141a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:04:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d11a1b0d-8687-41d7-b841-24979449141a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6449 chars]
	I0528 15:04:43.387576   12872 round_trippers.go:463] GET https://172.18.214.123:8443/api/v1/nodes/multinode-517300
	I0528 15:04:43.388114   12872 round_trippers.go:469] Request Headers:
	I0528 15:04:43.388114   12872 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:04:43.388114   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:04:43.392243   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 15:04:43.392243   12872 round_trippers.go:577] Response Headers:
	I0528 15:04:43.392243   12872 round_trippers.go:580]     Audit-Id: 55af865a-ddec-491b-8286-71635bf4b91b
	I0528 15:04:43.392340   12872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:04:43.392362   12872 round_trippers.go:580]     Content-Type: application/json
	I0528 15:04:43.392362   12872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:04:43.392362   12872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:04:43.392362   12872 round_trippers.go:580]     Date: Tue, 28 May 2024 22:04:43 GMT
	I0528 15:04:43.392557   12872 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"428","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0528 15:04:43.871736   12872 round_trippers.go:463] GET https://172.18.214.123:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-p9vb9
	I0528 15:04:43.871736   12872 round_trippers.go:469] Request Headers:
	I0528 15:04:43.871736   12872 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:04:43.871736   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:04:43.874679   12872 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 15:04:43.875239   12872 round_trippers.go:577] Response Headers:
	I0528 15:04:43.875273   12872 round_trippers.go:580]     Date: Tue, 28 May 2024 22:04:43 GMT
	I0528 15:04:43.875273   12872 round_trippers.go:580]     Audit-Id: b6d2ebdb-6bcb-4f08-856c-2178c3143315
	I0528 15:04:43.875273   12872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:04:43.875300   12872 round_trippers.go:580]     Content-Type: application/json
	I0528 15:04:43.875300   12872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:04:43.875300   12872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:04:43.875945   12872 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-p9vb9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d1d5d377-99a6-445e-9597-446b3057e80f","resourceVersion":"434","creationTimestamp":"2024-05-28T22:04:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d11a1b0d-8687-41d7-b841-24979449141a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:04:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d11a1b0d-8687-41d7-b841-24979449141a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6449 chars]
	I0528 15:04:43.876232   12872 round_trippers.go:463] GET https://172.18.214.123:8443/api/v1/nodes/multinode-517300
	I0528 15:04:43.876232   12872 round_trippers.go:469] Request Headers:
	I0528 15:04:43.876232   12872 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:04:43.876232   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:04:43.880869   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 15:04:43.880869   12872 round_trippers.go:577] Response Headers:
	I0528 15:04:43.880964   12872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:04:43.880964   12872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:04:43.880964   12872 round_trippers.go:580]     Date: Tue, 28 May 2024 22:04:43 GMT
	I0528 15:04:43.880964   12872 round_trippers.go:580]     Audit-Id: 55a58573-ad8f-405c-999d-d0029b741775
	I0528 15:04:43.880964   12872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:04:43.880964   12872 round_trippers.go:580]     Content-Type: application/json
	I0528 15:04:43.880964   12872 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"428","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0528 15:04:44.372787   12872 round_trippers.go:463] GET https://172.18.214.123:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-p9vb9
	I0528 15:04:44.372787   12872 round_trippers.go:469] Request Headers:
	I0528 15:04:44.372787   12872 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:04:44.372787   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:04:44.376347   12872 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 15:04:44.376347   12872 round_trippers.go:577] Response Headers:
	I0528 15:04:44.376588   12872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:04:44.376588   12872 round_trippers.go:580]     Content-Type: application/json
	I0528 15:04:44.376588   12872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:04:44.376588   12872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:04:44.376588   12872 round_trippers.go:580]     Date: Tue, 28 May 2024 22:04:44 GMT
	I0528 15:04:44.376588   12872 round_trippers.go:580]     Audit-Id: 69b2cbd7-e5e6-4e5f-a9c9-a17c76b81ea1
	I0528 15:04:44.376682   12872 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-p9vb9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d1d5d377-99a6-445e-9597-446b3057e80f","resourceVersion":"434","creationTimestamp":"2024-05-28T22:04:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d11a1b0d-8687-41d7-b841-24979449141a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:04:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d11a1b0d-8687-41d7-b841-24979449141a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6449 chars]
	I0528 15:04:44.377777   12872 round_trippers.go:463] GET https://172.18.214.123:8443/api/v1/nodes/multinode-517300
	I0528 15:04:44.377777   12872 round_trippers.go:469] Request Headers:
	I0528 15:04:44.377777   12872 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:04:44.377777   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:04:44.381028   12872 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 15:04:44.381028   12872 round_trippers.go:577] Response Headers:
	I0528 15:04:44.381028   12872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:04:44.381028   12872 round_trippers.go:580]     Date: Tue, 28 May 2024 22:04:44 GMT
	I0528 15:04:44.381028   12872 round_trippers.go:580]     Audit-Id: 8fc578e5-a9a0-4bf8-ad36-aefbf98e2a9c
	I0528 15:04:44.381028   12872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:04:44.381028   12872 round_trippers.go:580]     Content-Type: application/json
	I0528 15:04:44.381028   12872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:04:44.381028   12872 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"428","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0528 15:04:44.876251   12872 round_trippers.go:463] GET https://172.18.214.123:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-p9vb9
	I0528 15:04:44.876349   12872 round_trippers.go:469] Request Headers:
	I0528 15:04:44.876349   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:04:44.876349   12872 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:04:44.879762   12872 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 15:04:44.879762   12872 round_trippers.go:577] Response Headers:
	I0528 15:04:44.879762   12872 round_trippers.go:580]     Audit-Id: 89628343-566d-4b72-b6d0-0d606d3df2c6
	I0528 15:04:44.880266   12872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:04:44.880266   12872 round_trippers.go:580]     Content-Type: application/json
	I0528 15:04:44.880266   12872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:04:44.880266   12872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:04:44.880266   12872 round_trippers.go:580]     Date: Tue, 28 May 2024 22:04:44 GMT
	I0528 15:04:44.881023   12872 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-p9vb9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d1d5d377-99a6-445e-9597-446b3057e80f","resourceVersion":"447","creationTimestamp":"2024-05-28T22:04:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d11a1b0d-8687-41d7-b841-24979449141a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:04:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d11a1b0d-8687-41d7-b841-24979449141a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6580 chars]
	I0528 15:04:44.881459   12872 round_trippers.go:463] GET https://172.18.214.123:8443/api/v1/nodes/multinode-517300
	I0528 15:04:44.881459   12872 round_trippers.go:469] Request Headers:
	I0528 15:04:44.881459   12872 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:04:44.881459   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:04:44.884069   12872 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 15:04:44.884069   12872 round_trippers.go:577] Response Headers:
	I0528 15:04:44.884069   12872 round_trippers.go:580]     Audit-Id: d2dae0ee-0a11-477b-825b-4098c511ad3c
	I0528 15:04:44.884069   12872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:04:44.884069   12872 round_trippers.go:580]     Content-Type: application/json
	I0528 15:04:44.884069   12872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:04:44.884069   12872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:04:44.884069   12872 round_trippers.go:580]     Date: Tue, 28 May 2024 22:04:44 GMT
	I0528 15:04:44.885014   12872 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"428","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0528 15:04:44.885613   12872 pod_ready.go:92] pod "coredns-7db6d8ff4d-p9vb9" in "kube-system" namespace has status "Ready":"True"
	I0528 15:04:44.885613   12872 pod_ready.go:81] duration metric: took 2.0214546s for pod "coredns-7db6d8ff4d-p9vb9" in "kube-system" namespace to be "Ready" ...
	I0528 15:04:44.885718   12872 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-517300" in "kube-system" namespace to be "Ready" ...
	I0528 15:04:44.885804   12872 round_trippers.go:463] GET https://172.18.214.123:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-517300
	I0528 15:04:44.885804   12872 round_trippers.go:469] Request Headers:
	I0528 15:04:44.885804   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:04:44.885804   12872 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:04:44.888634   12872 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 15:04:44.888634   12872 round_trippers.go:577] Response Headers:
	I0528 15:04:44.888769   12872 round_trippers.go:580]     Content-Type: application/json
	I0528 15:04:44.888769   12872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:04:44.888769   12872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:04:44.888769   12872 round_trippers.go:580]     Date: Tue, 28 May 2024 22:04:44 GMT
	I0528 15:04:44.888769   12872 round_trippers.go:580]     Audit-Id: e1e1a330-171a-4905-9317-ba38bacf9a39
	I0528 15:04:44.888769   12872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:04:44.889077   12872 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-517300","namespace":"kube-system","uid":"8849c8b3-7ff5-42a1-856e-ce9d64467dea","resourceVersion":"323","creationTimestamp":"2024-05-28T22:04:18Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.18.214.123:2379","kubernetes.io/config.hash":"566cca5e4c6424d0dbd4fbafe29461a9","kubernetes.io/config.mirror":"566cca5e4c6424d0dbd4fbafe29461a9","kubernetes.io/config.seen":"2024-05-28T22:04:18.078849689Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:04:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6170 chars]
	I0528 15:04:44.889429   12872 round_trippers.go:463] GET https://172.18.214.123:8443/api/v1/nodes/multinode-517300
	I0528 15:04:44.889429   12872 round_trippers.go:469] Request Headers:
	I0528 15:04:44.889429   12872 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:04:44.889429   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:04:44.892013   12872 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 15:04:44.892013   12872 round_trippers.go:577] Response Headers:
	I0528 15:04:44.892013   12872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:04:44.892013   12872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:04:44.892013   12872 round_trippers.go:580]     Date: Tue, 28 May 2024 22:04:44 GMT
	I0528 15:04:44.892013   12872 round_trippers.go:580]     Audit-Id: 0c5aa80a-6d75-43f3-b936-eebfc035cf41
	I0528 15:04:44.892013   12872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:04:44.892013   12872 round_trippers.go:580]     Content-Type: application/json
	I0528 15:04:44.892794   12872 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"428","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0528 15:04:44.893738   12872 pod_ready.go:92] pod "etcd-multinode-517300" in "kube-system" namespace has status "Ready":"True"
	I0528 15:04:44.893762   12872 pod_ready.go:81] duration metric: took 8.0442ms for pod "etcd-multinode-517300" in "kube-system" namespace to be "Ready" ...
	I0528 15:04:44.893841   12872 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-517300" in "kube-system" namespace to be "Ready" ...
	I0528 15:04:44.893966   12872 round_trippers.go:463] GET https://172.18.214.123:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-517300
	I0528 15:04:44.894003   12872 round_trippers.go:469] Request Headers:
	I0528 15:04:44.894029   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:04:44.894029   12872 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:04:44.896746   12872 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 15:04:44.896746   12872 round_trippers.go:577] Response Headers:
	I0528 15:04:44.896746   12872 round_trippers.go:580]     Audit-Id: 1d35e6a7-4cc1-41d2-8831-2da0e0e288a9
	I0528 15:04:44.896746   12872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:04:44.896746   12872 round_trippers.go:580]     Content-Type: application/json
	I0528 15:04:44.896746   12872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:04:44.896746   12872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:04:44.896746   12872 round_trippers.go:580]     Date: Tue, 28 May 2024 22:04:44 GMT
	I0528 15:04:44.896746   12872 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-517300","namespace":"kube-system","uid":"1b776372-ea40-408c-99bb-a0d05e620038","resourceVersion":"326","creationTimestamp":"2024-05-28T22:04:18Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.18.214.123:8443","kubernetes.io/config.hash":"52f10060875936c1839f4d75318e7457","kubernetes.io/config.mirror":"52f10060875936c1839f4d75318e7457","kubernetes.io/config.seen":"2024-05-28T22:04:18.078855589Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:04:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7704 chars]
	I0528 15:04:44.897769   12872 round_trippers.go:463] GET https://172.18.214.123:8443/api/v1/nodes/multinode-517300
	I0528 15:04:44.897769   12872 round_trippers.go:469] Request Headers:
	I0528 15:04:44.897769   12872 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:04:44.897769   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:04:44.900670   12872 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 15:04:44.900670   12872 round_trippers.go:577] Response Headers:
	I0528 15:04:44.900670   12872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:04:44.900670   12872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:04:44.901215   12872 round_trippers.go:580]     Date: Tue, 28 May 2024 22:04:44 GMT
	I0528 15:04:44.901215   12872 round_trippers.go:580]     Audit-Id: 89476c02-d906-42d0-a3df-27df0219ba37
	I0528 15:04:44.901215   12872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:04:44.901215   12872 round_trippers.go:580]     Content-Type: application/json
	I0528 15:04:44.901429   12872 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"428","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0528 15:04:44.901607   12872 pod_ready.go:92] pod "kube-apiserver-multinode-517300" in "kube-system" namespace has status "Ready":"True"
	I0528 15:04:44.901607   12872 pod_ready.go:81] duration metric: took 7.7663ms for pod "kube-apiserver-multinode-517300" in "kube-system" namespace to be "Ready" ...
	I0528 15:04:44.901607   12872 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-517300" in "kube-system" namespace to be "Ready" ...
	I0528 15:04:44.901607   12872 round_trippers.go:463] GET https://172.18.214.123:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-517300
	I0528 15:04:44.901607   12872 round_trippers.go:469] Request Headers:
	I0528 15:04:44.901607   12872 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:04:44.901607   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:04:44.904385   12872 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 15:04:44.904385   12872 round_trippers.go:577] Response Headers:
	I0528 15:04:44.904385   12872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:04:44.905014   12872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:04:44.905014   12872 round_trippers.go:580]     Date: Tue, 28 May 2024 22:04:44 GMT
	I0528 15:04:44.905014   12872 round_trippers.go:580]     Audit-Id: 6c323d2f-f7a4-495c-8c6f-cb42d715296c
	I0528 15:04:44.905014   12872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:04:44.905014   12872 round_trippers.go:580]     Content-Type: application/json
	I0528 15:04:44.905360   12872 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-517300","namespace":"kube-system","uid":"8de5dc69-a626-438c-8f8e-1542254825fb","resourceVersion":"320","creationTimestamp":"2024-05-28T22:04:18Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"26672dc0ef1c19cfbeb08736c0b97223","kubernetes.io/config.mirror":"26672dc0ef1c19cfbeb08736c0b97223","kubernetes.io/config.seen":"2024-05-28T22:04:18.078856789Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:04:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7269 chars]
	I0528 15:04:44.905714   12872 round_trippers.go:463] GET https://172.18.214.123:8443/api/v1/nodes/multinode-517300
	I0528 15:04:44.905714   12872 round_trippers.go:469] Request Headers:
	I0528 15:04:44.905714   12872 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:04:44.905714   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:04:44.917800   12872 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0528 15:04:44.917800   12872 round_trippers.go:577] Response Headers:
	I0528 15:04:44.917800   12872 round_trippers.go:580]     Audit-Id: 531c61f8-4f52-479c-a544-cc8287309a84
	I0528 15:04:44.917800   12872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:04:44.917902   12872 round_trippers.go:580]     Content-Type: application/json
	I0528 15:04:44.917902   12872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:04:44.917902   12872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:04:44.917902   12872 round_trippers.go:580]     Date: Tue, 28 May 2024 22:04:44 GMT
	I0528 15:04:44.918111   12872 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"428","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0528 15:04:44.918111   12872 pod_ready.go:92] pod "kube-controller-manager-multinode-517300" in "kube-system" namespace has status "Ready":"True"
	I0528 15:04:44.918111   12872 pod_ready.go:81] duration metric: took 16.5034ms for pod "kube-controller-manager-multinode-517300" in "kube-system" namespace to be "Ready" ...
	I0528 15:04:44.918111   12872 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hbbq2" in "kube-system" namespace to be "Ready" ...
	I0528 15:04:44.918111   12872 round_trippers.go:463] GET https://172.18.214.123:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hbbq2
	I0528 15:04:44.918111   12872 round_trippers.go:469] Request Headers:
	I0528 15:04:44.918111   12872 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:04:44.918111   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:04:44.923501   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0528 15:04:44.923796   12872 round_trippers.go:577] Response Headers:
	I0528 15:04:44.923796   12872 round_trippers.go:580]     Audit-Id: d0a1c7cd-8de2-479a-b82f-6ccee986c66d
	I0528 15:04:44.923796   12872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:04:44.923796   12872 round_trippers.go:580]     Content-Type: application/json
	I0528 15:04:44.923796   12872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:04:44.923796   12872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:04:44.923796   12872 round_trippers.go:580]     Date: Tue, 28 May 2024 22:04:44 GMT
	I0528 15:04:44.925060   12872 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-hbbq2","generateName":"kube-proxy-","namespace":"kube-system","uid":"3d8065a2-e1bb-4cf6-837a-f63ed0718835","resourceVersion":"408","creationTimestamp":"2024-05-28T22:04:31Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"1b969d95-d2e7-4f1f-9dfe-2e267564d504","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:04:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1b969d95-d2e7-4f1f-9dfe-2e267564d504\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5833 chars]
	I0528 15:04:44.926022   12872 round_trippers.go:463] GET https://172.18.214.123:8443/api/v1/nodes/multinode-517300
	I0528 15:04:44.926022   12872 round_trippers.go:469] Request Headers:
	I0528 15:04:44.926022   12872 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:04:44.926022   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:04:44.928581   12872 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 15:04:44.928799   12872 round_trippers.go:577] Response Headers:
	I0528 15:04:44.928799   12872 round_trippers.go:580]     Audit-Id: f0c02009-9fc8-4b44-893c-1f564467a955
	I0528 15:04:44.928799   12872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:04:44.928799   12872 round_trippers.go:580]     Content-Type: application/json
	I0528 15:04:44.928799   12872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:04:44.928799   12872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:04:44.928799   12872 round_trippers.go:580]     Date: Tue, 28 May 2024 22:04:44 GMT
	I0528 15:04:44.929345   12872 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"428","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0528 15:04:44.929600   12872 pod_ready.go:92] pod "kube-proxy-hbbq2" in "kube-system" namespace has status "Ready":"True"
	I0528 15:04:44.929600   12872 pod_ready.go:81] duration metric: took 11.4897ms for pod "kube-proxy-hbbq2" in "kube-system" namespace to be "Ready" ...
	I0528 15:04:44.929600   12872 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-517300" in "kube-system" namespace to be "Ready" ...
	I0528 15:04:45.077689   12872 request.go:629] Waited for 147.9131ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.214.123:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-517300
	I0528 15:04:45.077804   12872 round_trippers.go:463] GET https://172.18.214.123:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-517300
	I0528 15:04:45.078008   12872 round_trippers.go:469] Request Headers:
	I0528 15:04:45.078008   12872 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:04:45.078008   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:04:45.081639   12872 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 15:04:45.082047   12872 round_trippers.go:577] Response Headers:
	I0528 15:04:45.082047   12872 round_trippers.go:580]     Audit-Id: e847ea94-c1b8-4633-bf04-10e2f8a3b780
	I0528 15:04:45.082047   12872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:04:45.082047   12872 round_trippers.go:580]     Content-Type: application/json
	I0528 15:04:45.082047   12872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:04:45.082187   12872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:04:45.082187   12872 round_trippers.go:580]     Date: Tue, 28 May 2024 22:04:45 GMT
	I0528 15:04:45.082345   12872 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-517300","namespace":"kube-system","uid":"2dc1e4d9-ef1b-4f83-8762-569efcb3f4c6","resourceVersion":"318","creationTimestamp":"2024-05-28T22:04:18Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"84e130f816e1eba82686beb351d9b34a","kubernetes.io/config.mirror":"84e130f816e1eba82686beb351d9b34a","kubernetes.io/config.seen":"2024-05-28T22:04:18.078857889Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:04:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4999 chars]
	I0528 15:04:45.279399   12872 request.go:629] Waited for 195.8157ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.214.123:8443/api/v1/nodes/multinode-517300
	I0528 15:04:45.279579   12872 round_trippers.go:463] GET https://172.18.214.123:8443/api/v1/nodes/multinode-517300
	I0528 15:04:45.279579   12872 round_trippers.go:469] Request Headers:
	I0528 15:04:45.279579   12872 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:04:45.279579   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:04:45.283043   12872 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 15:04:45.283691   12872 round_trippers.go:577] Response Headers:
	I0528 15:04:45.283691   12872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:04:45.283691   12872 round_trippers.go:580]     Date: Tue, 28 May 2024 22:04:45 GMT
	I0528 15:04:45.283691   12872 round_trippers.go:580]     Audit-Id: dded8a13-c9fd-4ab3-b539-61da63e1f3bf
	I0528 15:04:45.283691   12872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:04:45.283691   12872 round_trippers.go:580]     Content-Type: application/json
	I0528 15:04:45.283691   12872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:04:45.284040   12872 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"428","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0528 15:04:45.285049   12872 pod_ready.go:92] pod "kube-scheduler-multinode-517300" in "kube-system" namespace has status "Ready":"True"
	I0528 15:04:45.285049   12872 pod_ready.go:81] duration metric: took 355.4478ms for pod "kube-scheduler-multinode-517300" in "kube-system" namespace to be "Ready" ...
	I0528 15:04:45.285164   12872 pod_ready.go:38] duration metric: took 2.4402284s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
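
The pod_ready loop logged above polls each control-plane pod roughly every 500ms (visible in the timestamps) until its PodReady condition reports "True". A minimal client-go sketch of that check, not minikube's actual code; the kubeconfig path, poll interval, and pod name are illustrative assumptions taken from the log:

    // Poll a pod until its Ready condition is True, mirroring the pod_ready
    // checks in the log above. Assumes a kubeconfig at the default location.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        for {
            pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-7db6d8ff4d-p9vb9", metav1.GetOptions{})
            if err == nil && isPodReady(pod) {
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
    }
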
	I0528 15:04:45.285164   12872 api_server.go:52] waiting for apiserver process to appear ...
	I0528 15:04:45.298833   12872 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 15:04:45.327248   12872 command_runner.go:130] > 1999
	I0528 15:04:45.327248   12872 api_server.go:72] duration metric: took 12.9266545s to wait for apiserver process to appear ...
	I0528 15:04:45.327248   12872 api_server.go:88] waiting for apiserver healthz status ...
	I0528 15:04:45.327248   12872 api_server.go:253] Checking apiserver healthz at https://172.18.214.123:8443/healthz ...
	I0528 15:04:45.336507   12872 api_server.go:279] https://172.18.214.123:8443/healthz returned 200:
	ok
	I0528 15:04:45.336816   12872 round_trippers.go:463] GET https://172.18.214.123:8443/version
	I0528 15:04:45.336816   12872 round_trippers.go:469] Request Headers:
	I0528 15:04:45.336816   12872 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:04:45.336816   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:04:45.338153   12872 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0528 15:04:45.338153   12872 round_trippers.go:577] Response Headers:
	I0528 15:04:45.338153   12872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:04:45.338153   12872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:04:45.338153   12872 round_trippers.go:580]     Content-Length: 263
	I0528 15:04:45.338153   12872 round_trippers.go:580]     Date: Tue, 28 May 2024 22:04:45 GMT
	I0528 15:04:45.338153   12872 round_trippers.go:580]     Audit-Id: bc4309d5-40b7-4571-ba97-7a95dc09d203
	I0528 15:04:45.338153   12872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:04:45.338153   12872 round_trippers.go:580]     Content-Type: application/json
	I0528 15:04:45.339220   12872 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.1",
	  "gitCommit": "6911225c3f747e1cd9d109c305436d08b668f086",
	  "gitTreeState": "clean",
	  "buildDate": "2024-05-14T10:42:02Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0528 15:04:45.339372   12872 api_server.go:141] control plane version: v1.30.1
	I0528 15:04:45.339442   12872 api_server.go:131] duration metric: took 12.1934ms to wait for apiserver health ...
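
The two requests above are the standard liveness and version probes: GET /healthz must return the literal body "ok", and GET /version returns the JSON shown (major, minor, gitVersion). A hedged sketch of the same pair of calls with client-go, assuming a default kubeconfig:

    // Probe /healthz and /version against the apiserver, as the log does.
    package main

    import (
        "context"
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        // /healthz answers with the plain-text body "ok" when healthy.
        body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.TODO())
        if err != nil {
            panic(err)
        }
        fmt.Printf("healthz: %s\n", body)

        // /version is the endpoint that produced the JSON body in the log.
        v, err := cs.Discovery().ServerVersion()
        if err != nil {
            panic(err)
        }
        fmt.Println("control plane version:", v.GitVersion)
    }
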
	I0528 15:04:45.339442   12872 system_pods.go:43] waiting for kube-system pods to appear ...
	I0528 15:04:45.480401   12872 request.go:629] Waited for 140.883ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.214.123:8443/api/v1/namespaces/kube-system/pods
	I0528 15:04:45.480599   12872 round_trippers.go:463] GET https://172.18.214.123:8443/api/v1/namespaces/kube-system/pods
	I0528 15:04:45.480599   12872 round_trippers.go:469] Request Headers:
	I0528 15:04:45.480599   12872 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:04:45.480664   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:04:45.488600   12872 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0528 15:04:45.488600   12872 round_trippers.go:577] Response Headers:
	I0528 15:04:45.488600   12872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:04:45.488600   12872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:04:45.488600   12872 round_trippers.go:580]     Date: Tue, 28 May 2024 22:04:45 GMT
	I0528 15:04:45.488600   12872 round_trippers.go:580]     Audit-Id: ad3200b4-29bc-4c10-a04f-03e947a72d18
	I0528 15:04:45.488600   12872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:04:45.488600   12872 round_trippers.go:580]     Content-Type: application/json
	I0528 15:04:45.490541   12872 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"452"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-p9vb9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d1d5d377-99a6-445e-9597-446b3057e80f","resourceVersion":"447","creationTimestamp":"2024-05-28T22:04:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d11a1b0d-8687-41d7-b841-24979449141a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:04:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d11a1b0d-8687-41d7-b841-24979449141a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56499 chars]
	I0528 15:04:45.493485   12872 system_pods.go:59] 8 kube-system pods found
	I0528 15:04:45.493557   12872 system_pods.go:61] "coredns-7db6d8ff4d-p9vb9" [d1d5d377-99a6-445e-9597-446b3057e80f] Running
	I0528 15:04:45.493557   12872 system_pods.go:61] "etcd-multinode-517300" [8849c8b3-7ff5-42a1-856e-ce9d64467dea] Running
	I0528 15:04:45.493557   12872 system_pods.go:61] "kindnet-rmwrj" [df5913a4-73bb-4d14-8b59-a59d2b449bbe] Running
	I0528 15:04:45.493557   12872 system_pods.go:61] "kube-apiserver-multinode-517300" [1b776372-ea40-408c-99bb-a0d05e620038] Running
	I0528 15:04:45.493557   12872 system_pods.go:61] "kube-controller-manager-multinode-517300" [8de5dc69-a626-438c-8f8e-1542254825fb] Running
	I0528 15:04:45.493557   12872 system_pods.go:61] "kube-proxy-hbbq2" [3d8065a2-e1bb-4cf6-837a-f63ed0718835] Running
	I0528 15:04:45.493557   12872 system_pods.go:61] "kube-scheduler-multinode-517300" [2dc1e4d9-ef1b-4f83-8762-569efcb3f4c6] Running
	I0528 15:04:45.493630   12872 system_pods.go:61] "storage-provisioner" [ab978e2d-322a-4207-9539-d2d92a25e566] Running
	I0528 15:04:45.493630   12872 system_pods.go:74] duration metric: took 154.1882ms to wait for pod list to return data ...
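
The recurring "Waited for ... due to client-side throttling, not priority and fairness" lines come from client-go's own token-bucket rate limiter, not from the API server: once the client's QPS budget is spent, it pauses before sending the next request. A sketch of relaxing that budget on the rest.Config; the QPS and Burst values here are illustrative, not minikube's settings:

    // Build a clientset with a larger client-side rate-limit budget so GETs
    // are not delayed the way the "client-side throttling" log lines show.
    package main

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func newClientWithHigherLimits() (*kubernetes.Clientset, error) {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            return nil, err
        }
        cfg.QPS = 50    // sustained requests per second before throttling
        cfg.Burst = 100 // short bursts allowed above the sustained rate
        return kubernetes.NewForConfig(cfg)
    }

    func main() {
        if _, err := newClientWithHigherLimits(); err != nil {
            panic(err)
        }
    }
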
	I0528 15:04:45.493685   12872 default_sa.go:34] waiting for default service account to be created ...
	I0528 15:04:45.684561   12872 request.go:629] Waited for 190.5325ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.214.123:8443/api/v1/namespaces/default/serviceaccounts
	I0528 15:04:45.684645   12872 round_trippers.go:463] GET https://172.18.214.123:8443/api/v1/namespaces/default/serviceaccounts
	I0528 15:04:45.684645   12872 round_trippers.go:469] Request Headers:
	I0528 15:04:45.684645   12872 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:04:45.684723   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:04:45.689095   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 15:04:45.689095   12872 round_trippers.go:577] Response Headers:
	I0528 15:04:45.689521   12872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:04:45.689521   12872 round_trippers.go:580]     Content-Length: 261
	I0528 15:04:45.689521   12872 round_trippers.go:580]     Date: Tue, 28 May 2024 22:04:45 GMT
	I0528 15:04:45.689521   12872 round_trippers.go:580]     Audit-Id: fc71e071-10ba-426e-82f5-10bf3cee9d03
	I0528 15:04:45.689521   12872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:04:45.689521   12872 round_trippers.go:580]     Content-Type: application/json
	I0528 15:04:45.689521   12872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:04:45.689609   12872 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"452"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"1a497e7b-5280-4f3b-b9dc-265476c8a394","resourceVersion":"364","creationTimestamp":"2024-05-28T22:04:31Z"}}]}
	I0528 15:04:45.689943   12872 default_sa.go:45] found service account: "default"
	I0528 15:04:45.690031   12872 default_sa.go:55] duration metric: took 196.3452ms for default service account to be created ...
	I0528 15:04:45.690031   12872 system_pods.go:116] waiting for k8s-apps to be running ...
	I0528 15:04:45.886451   12872 request.go:629] Waited for 196.0996ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.214.123:8443/api/v1/namespaces/kube-system/pods
	I0528 15:04:45.886755   12872 round_trippers.go:463] GET https://172.18.214.123:8443/api/v1/namespaces/kube-system/pods
	I0528 15:04:45.886755   12872 round_trippers.go:469] Request Headers:
	I0528 15:04:45.886755   12872 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:04:45.886853   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:04:45.894522   12872 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0528 15:04:45.895721   12872 round_trippers.go:577] Response Headers:
	I0528 15:04:45.895721   12872 round_trippers.go:580]     Audit-Id: d3f26f7b-154d-4222-8bad-2572bc904773
	I0528 15:04:45.895721   12872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:04:45.895721   12872 round_trippers.go:580]     Content-Type: application/json
	I0528 15:04:45.895721   12872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:04:45.895721   12872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:04:45.895721   12872 round_trippers.go:580]     Date: Tue, 28 May 2024 22:04:45 GMT
	I0528 15:04:45.897662   12872 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"453"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-p9vb9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d1d5d377-99a6-445e-9597-446b3057e80f","resourceVersion":"447","creationTimestamp":"2024-05-28T22:04:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d11a1b0d-8687-41d7-b841-24979449141a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:04:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d11a1b0d-8687-41d7-b841-24979449141a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56499 chars]
	I0528 15:04:45.902648   12872 system_pods.go:86] 8 kube-system pods found
	I0528 15:04:45.902679   12872 system_pods.go:89] "coredns-7db6d8ff4d-p9vb9" [d1d5d377-99a6-445e-9597-446b3057e80f] Running
	I0528 15:04:45.902704   12872 system_pods.go:89] "etcd-multinode-517300" [8849c8b3-7ff5-42a1-856e-ce9d64467dea] Running
	I0528 15:04:45.902704   12872 system_pods.go:89] "kindnet-rmwrj" [df5913a4-73bb-4d14-8b59-a59d2b449bbe] Running
	I0528 15:04:45.902704   12872 system_pods.go:89] "kube-apiserver-multinode-517300" [1b776372-ea40-408c-99bb-a0d05e620038] Running
	I0528 15:04:45.902704   12872 system_pods.go:89] "kube-controller-manager-multinode-517300" [8de5dc69-a626-438c-8f8e-1542254825fb] Running
	I0528 15:04:45.902704   12872 system_pods.go:89] "kube-proxy-hbbq2" [3d8065a2-e1bb-4cf6-837a-f63ed0718835] Running
	I0528 15:04:45.902704   12872 system_pods.go:89] "kube-scheduler-multinode-517300" [2dc1e4d9-ef1b-4f83-8762-569efcb3f4c6] Running
	I0528 15:04:45.902704   12872 system_pods.go:89] "storage-provisioner" [ab978e2d-322a-4207-9539-d2d92a25e566] Running
	I0528 15:04:45.902704   12872 system_pods.go:126] duration metric: took 212.6727ms to wait for k8s-apps to be running ...
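
The "k8s-apps to be running" check above amounts to listing the kube-system pods and confirming each reports the Running phase, which is what the eight lines above record. A minimal sketch of that listing, with a default kubeconfig assumed:

    // List kube-system pods and report whether each is in the Running phase.
    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Printf("%d kube-system pods found\n", len(pods.Items))
        for _, p := range pods.Items {
            fmt.Printf("%q running=%v\n", p.Name, p.Status.Phase == corev1.PodRunning)
        }
    }
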
	I0528 15:04:45.902704   12872 system_svc.go:44] waiting for kubelet service to be running ....
	I0528 15:04:45.912618   12872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0528 15:04:45.938261   12872 system_svc.go:56] duration metric: took 35.5569ms WaitForService to wait for kubelet
	I0528 15:04:45.938261   12872 kubeadm.go:576] duration metric: took 13.5376659s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0528 15:04:45.938359   12872 node_conditions.go:102] verifying NodePressure condition ...
	I0528 15:04:46.089829   12872 request.go:629] Waited for 151.2982ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.214.123:8443/api/v1/nodes
	I0528 15:04:46.090130   12872 round_trippers.go:463] GET https://172.18.214.123:8443/api/v1/nodes
	I0528 15:04:46.090130   12872 round_trippers.go:469] Request Headers:
	I0528 15:04:46.090130   12872 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:04:46.090130   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:04:46.093727   12872 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 15:04:46.093727   12872 round_trippers.go:577] Response Headers:
	I0528 15:04:46.094549   12872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:04:46.094549   12872 round_trippers.go:580]     Content-Type: application/json
	I0528 15:04:46.094549   12872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:04:46.094549   12872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:04:46.094549   12872 round_trippers.go:580]     Date: Tue, 28 May 2024 22:04:46 GMT
	I0528 15:04:46.094549   12872 round_trippers.go:580]     Audit-Id: 93e3d2c6-f676-43bb-a8ed-c951e28a3810
	I0528 15:04:46.094781   12872 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"454"},"items":[{"metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"428","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 4837 chars]
	I0528 15:04:46.094959   12872 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0528 15:04:46.094959   12872 node_conditions.go:123] node cpu capacity is 2
	I0528 15:04:46.094959   12872 node_conditions.go:105] duration metric: took 156.5996ms to run NodePressure ...
	I0528 15:04:46.094959   12872 start.go:240] waiting for startup goroutines ...
	I0528 15:04:46.094959   12872 start.go:245] waiting for cluster config update ...
	I0528 15:04:46.095530   12872 start.go:254] writing updated cluster config ...
	I0528 15:04:46.101582   12872 out.go:177] 
	I0528 15:04:46.104662   12872 config.go:182] Loaded profile config "ha-941700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0528 15:04:46.112282   12872 config.go:182] Loaded profile config "multinode-517300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0528 15:04:46.113262   12872 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-517300\config.json ...
	I0528 15:04:46.118102   12872 out.go:177] * Starting "multinode-517300-m02" worker node in "multinode-517300" cluster
	I0528 15:04:46.123604   12872 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0528 15:04:46.124198   12872 cache.go:56] Caching tarball of preloaded images
	I0528 15:04:46.124320   12872 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0528 15:04:46.124320   12872 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0528 15:04:46.124320   12872 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-517300\config.json ...
	I0528 15:04:46.128771   12872 start.go:360] acquireMachinesLock for multinode-517300-m02: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0528 15:04:46.129783   12872 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-517300-m02"
	I0528 15:04:46.129783   12872 start.go:93] Provisioning new machine with config: &{Name:multinode-517300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.30.1 ClusterName:multinode-517300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.18.214.123 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDi
sks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0528 15:04:46.129783   12872 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0528 15:04:46.133312   12872 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0528 15:04:46.134095   12872 start.go:159] libmachine.API.Create for "multinode-517300" (driver="hyperv")
	I0528 15:04:46.134095   12872 client.go:168] LocalClient.Create starting
	I0528 15:04:46.134575   12872 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0528 15:04:46.135232   12872 main.go:141] libmachine: Decoding PEM data...
	I0528 15:04:46.135232   12872 main.go:141] libmachine: Parsing certificate...
	I0528 15:04:46.135232   12872 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0528 15:04:46.136002   12872 main.go:141] libmachine: Decoding PEM data...
	I0528 15:04:46.136002   12872 main.go:141] libmachine: Parsing certificate...
	I0528 15:04:46.136002   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0528 15:04:48.075107   12872 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0528 15:04:48.075924   12872 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:04:48.076017   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0528 15:04:49.844500   12872 main.go:141] libmachine: [stdout =====>] : False
	
	I0528 15:04:49.844500   12872 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:04:49.844500   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0528 15:04:51.353842   12872 main.go:141] libmachine: [stdout =====>] : True
	
	I0528 15:04:51.354286   12872 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:04:51.354401   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0528 15:04:55.119812   12872 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0528 15:04:55.119899   12872 main.go:141] libmachine: [stderr =====>] : 
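
Note: the [executing ==>] / [stdout =====>] / [stderr =====>] triplets throughout this log come from the Hyper-V driver shelling each query out to powershell.exe with -NoProfile -NonInteractive and capturing the two streams separately. A minimal Go sketch of that pattern (runPowerShell is a hypothetical helper name, not the driver's actual API; the script text is the Get-VMSwitch query from the log above):

    package main

    import (
    	"bytes"
    	"fmt"
    	"os/exec"
    )

    // runPowerShell runs a script through powershell.exe -NoProfile -NonInteractive
    // and returns stdout and stderr separately, mirroring the log's two streams.
    func runPowerShell(script string) (string, string, error) {
    	cmd := exec.Command(`C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
    		"-NoProfile", "-NonInteractive", script)
    	var stdout, stderr bytes.Buffer
    	cmd.Stdout = &stdout
    	cmd.Stderr = &stderr
    	err := cmd.Run()
    	return stdout.String(), stderr.String(), err
    }

    func main() {
    	out, errOut, err := runPowerShell(`[Console]::OutputEncoding = [Text.Encoding]::UTF8; ` +
    		`ConvertTo-Json @(Hyper-V\Get-VMSwitch | Select Id, Name, SwitchType)`)
    	fmt.Printf("[stdout =====>] : %s\n[stderr =====>] : %s\nerr: %v\n", out, errOut, err)
    }
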
	I0528 15:04:55.122138   12872 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1716398070-18934-amd64.iso...
	I0528 15:04:55.612150   12872 main.go:141] libmachine: Creating SSH key...
	I0528 15:04:55.830095   12872 main.go:141] libmachine: Creating VM...
	I0528 15:04:55.831092   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0528 15:04:58.819789   12872 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0528 15:04:58.819789   12872 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:04:58.819789   12872 main.go:141] libmachine: Using switch "Default Switch"
	I0528 15:04:58.819789   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0528 15:05:00.574235   12872 main.go:141] libmachine: [stdout =====>] : True
	
	I0528 15:05:00.574235   12872 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:05:00.574337   12872 main.go:141] libmachine: Creating VHD
	I0528 15:05:00.574337   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-517300-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0528 15:05:04.378485   12872 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-517300-m02\fixed
	                          .vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 596D1501-2AA1-4372-B80B-7D3F5DD644D1
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0528 15:05:04.378574   12872 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:05:04.378648   12872 main.go:141] libmachine: Writing magic tar header
	I0528 15:05:04.378648   12872 main.go:141] libmachine: Writing SSH key tar header
	I0528 15:05:04.388062   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-517300-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-517300-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0528 15:05:07.601577   12872 main.go:141] libmachine: [stdout =====>] : 
	I0528 15:05:07.601804   12872 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:05:07.601804   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-517300-m02\disk.vhd' -SizeBytes 20000MB
	I0528 15:05:10.218417   12872 main.go:141] libmachine: [stdout =====>] : 
	I0528 15:05:10.218417   12872 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:05:10.218417   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-517300-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-517300-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0528 15:05:13.925121   12872 main.go:141] libmachine: [stdout =====>] : 
	Name                 State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----                 ----- ----------- ----------------- ------   ------             -------
	multinode-517300-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0528 15:05:13.925951   12872 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:05:13.926270   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-517300-m02 -DynamicMemoryEnabled $false
	I0528 15:05:16.219354   12872 main.go:141] libmachine: [stdout =====>] : 
	I0528 15:05:16.220151   12872 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:05:16.220151   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-517300-m02 -Count 2
	I0528 15:05:18.459064   12872 main.go:141] libmachine: [stdout =====>] : 
	I0528 15:05:18.459064   12872 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:05:18.459781   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-517300-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-517300-m02\boot2docker.iso'
	I0528 15:05:21.111694   12872 main.go:141] libmachine: [stdout =====>] : 
	I0528 15:05:21.112577   12872 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:05:21.112577   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-517300-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-517300-m02\disk.vhd'
	I0528 15:05:23.838317   12872 main.go:141] libmachine: [stdout =====>] : 
	I0528 15:05:23.838317   12872 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:05:23.838317   12872 main.go:141] libmachine: Starting VM...
	I0528 15:05:23.838782   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-517300-m02
	I0528 15:05:26.911896   12872 main.go:141] libmachine: [stdout =====>] : 
	I0528 15:05:26.911896   12872 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:05:26.911896   12872 main.go:141] libmachine: Waiting for host to start...
	I0528 15:05:26.911896   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300-m02 ).state
	I0528 15:05:29.231366   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:05:29.231366   12872 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:05:29.231508   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300-m02 ).networkadapters[0]).ipaddresses[0]
	I0528 15:05:31.773175   12872 main.go:141] libmachine: [stdout =====>] : 
	I0528 15:05:31.773175   12872 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:05:32.787460   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300-m02 ).state
	I0528 15:05:35.019498   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:05:35.030729   12872 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:05:35.030837   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300-m02 ).networkadapters[0]).ipaddresses[0]
	I0528 15:05:37.639829   12872 main.go:141] libmachine: [stdout =====>] : 
	I0528 15:05:37.639829   12872 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:05:38.651391   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300-m02 ).state
	I0528 15:05:40.872176   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:05:40.872176   12872 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:05:40.883907   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300-m02 ).networkadapters[0]).ipaddresses[0]
	I0528 15:05:43.416082   12872 main.go:141] libmachine: [stdout =====>] : 
	I0528 15:05:43.416082   12872 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:05:44.430337   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300-m02 ).state
	I0528 15:05:46.659315   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:05:46.659315   12872 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:05:46.664969   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300-m02 ).networkadapters[0]).ipaddresses[0]
	I0528 15:05:49.148032   12872 main.go:141] libmachine: [stdout =====>] : 
	I0528 15:05:49.148032   12872 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:05:50.150263   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300-m02 ).state
	I0528 15:05:52.363904   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:05:52.375293   12872 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:05:52.375356   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300-m02 ).networkadapters[0]).ipaddresses[0]
	I0528 15:05:54.885376   12872 main.go:141] libmachine: [stdout =====>] : 172.18.223.89
	
	I0528 15:05:54.885376   12872 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:05:54.885376   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300-m02 ).state
	I0528 15:05:56.990135   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:05:56.990135   12872 main.go:141] libmachine: [stderr =====>] : 
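
The block above (15:05:26 through 15:05:56) is the driver's "Waiting for host to start..." loop: it alternates between reading the VM state and the first adapter's first IP address until a non-empty address comes back (172.18.223.89 at 15:05:54 in this run). A self-contained sketch of that loop, under the same assumption of shelling out to powershell.exe (ps and waitForIP are hypothetical names):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    // ps runs a PowerShell snippet and returns its trimmed stdout.
    func ps(script string) (string, error) {
    	out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", script).Output()
    	return strings.TrimSpace(string(out)), err
    }

    // waitForIP polls the VM state and its first reported IP address until one appears.
    func waitForIP(vmName string, timeout time.Duration) (string, error) {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		state, err := ps(fmt.Sprintf(`( Hyper-V\Get-VM %s ).state`, vmName))
    		if err != nil {
    			return "", err
    		}
    		if state == "Running" {
    			ip, err := ps(fmt.Sprintf(`(( Hyper-V\Get-VM %s ).networkadapters[0]).ipaddresses[0]`, vmName))
    			if err != nil {
    				return "", err
    			}
    			if ip != "" {
    				return ip, nil
    			}
    		}
    		time.Sleep(time.Second)
    	}
    	return "", fmt.Errorf("timed out waiting for %s to report an IP address", vmName)
    }

    func main() {
    	ip, err := waitForIP("multinode-517300-m02", 4*time.Minute)
    	fmt.Println(ip, err)
    }
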
	I0528 15:05:56.990668   12872 machine.go:94] provisionDockerMachine start ...
	I0528 15:05:56.990987   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300-m02 ).state
	I0528 15:05:59.192857   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:05:59.192857   12872 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:05:59.193137   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300-m02 ).networkadapters[0]).ipaddresses[0]
	I0528 15:06:01.674774   12872 main.go:141] libmachine: [stdout =====>] : 172.18.223.89
	
	I0528 15:06:01.674774   12872 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:06:01.692278   12872 main.go:141] libmachine: Using SSH client type: native
	I0528 15:06:01.704065   12872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5ca4a0] 0x5cd080 <nil>  [] 0s} 172.18.223.89 22 <nil> <nil>}
	I0528 15:06:01.704065   12872 main.go:141] libmachine: About to run SSH command:
	hostname
	I0528 15:06:01.829185   12872 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0528 15:06:01.829185   12872 buildroot.go:166] provisioning hostname "multinode-517300-m02"
	I0528 15:06:01.829185   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300-m02 ).state
	I0528 15:06:03.898415   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:06:03.899056   12872 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:06:03.899056   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300-m02 ).networkadapters[0]).ipaddresses[0]
	I0528 15:06:06.369993   12872 main.go:141] libmachine: [stdout =====>] : 172.18.223.89
	
	I0528 15:06:06.369993   12872 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:06:06.387497   12872 main.go:141] libmachine: Using SSH client type: native
	I0528 15:06:06.387796   12872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5ca4a0] 0x5cd080 <nil>  [] 0s} 172.18.223.89 22 <nil> <nil>}
	I0528 15:06:06.387796   12872 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-517300-m02 && echo "multinode-517300-m02" | sudo tee /etc/hostname
	I0528 15:06:06.534997   12872 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-517300-m02
	
	I0528 15:06:06.534997   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300-m02 ).state
	I0528 15:06:08.635362   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:06:08.646158   12872 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:06:08.646232   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300-m02 ).networkadapters[0]).ipaddresses[0]
	I0528 15:06:11.194898   12872 main.go:141] libmachine: [stdout =====>] : 172.18.223.89
	
	I0528 15:06:11.206196   12872 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:06:11.213008   12872 main.go:141] libmachine: Using SSH client type: native
	I0528 15:06:11.213661   12872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5ca4a0] 0x5cd080 <nil>  [] 0s} 172.18.223.89 22 <nil> <nil>}
	I0528 15:06:11.213661   12872 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-517300-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-517300-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-517300-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0528 15:06:11.350667   12872 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0528 15:06:11.350667   12872 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0528 15:06:11.350667   12872 buildroot.go:174] setting up certificates
	I0528 15:06:11.350789   12872 provision.go:84] configureAuth start
	I0528 15:06:11.350789   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300-m02 ).state
	I0528 15:06:13.489102   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:06:13.500674   12872 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:06:13.500674   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300-m02 ).networkadapters[0]).ipaddresses[0]
	I0528 15:06:16.072735   12872 main.go:141] libmachine: [stdout =====>] : 172.18.223.89
	
	I0528 15:06:16.084187   12872 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:06:16.084378   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300-m02 ).state
	I0528 15:06:18.223650   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:06:18.236117   12872 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:06:18.236278   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300-m02 ).networkadapters[0]).ipaddresses[0]
	I0528 15:06:20.747468   12872 main.go:141] libmachine: [stdout =====>] : 172.18.223.89
	
	I0528 15:06:20.747468   12872 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:06:20.747779   12872 provision.go:143] copyHostCerts
	I0528 15:06:20.747926   12872 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0528 15:06:20.748265   12872 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0528 15:06:20.748339   12872 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0528 15:06:20.748877   12872 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0528 15:06:20.749972   12872 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0528 15:06:20.750213   12872 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0528 15:06:20.750213   12872 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0528 15:06:20.750589   12872 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1675 bytes)
	I0528 15:06:20.751488   12872 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0528 15:06:20.751709   12872 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0528 15:06:20.751763   12872 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0528 15:06:20.751763   12872 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0528 15:06:20.753068   12872 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-517300-m02 san=[127.0.0.1 172.18.223.89 localhost minikube multinode-517300-m02]
	I0528 15:06:20.976971   12872 provision.go:177] copyRemoteCerts
	I0528 15:06:20.999049   12872 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0528 15:06:20.999619   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300-m02 ).state
	I0528 15:06:23.130340   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:06:23.130340   12872 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:06:23.141950   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300-m02 ).networkadapters[0]).ipaddresses[0]
	I0528 15:06:25.642544   12872 main.go:141] libmachine: [stdout =====>] : 172.18.223.89
	
	I0528 15:06:25.642544   12872 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:06:25.654702   12872 sshutil.go:53] new ssh client: &{IP:172.18.223.89 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-517300-m02\id_rsa Username:docker}
	I0528 15:06:25.755727   12872 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7560953s)
	I0528 15:06:25.755727   12872 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0528 15:06:25.755889   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0528 15:06:25.810655   12872 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0528 15:06:25.810833   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0528 15:06:25.869246   12872 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0528 15:06:25.869567   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0528 15:06:25.919665   12872 provision.go:87] duration metric: took 14.568838s to configureAuth
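
For context on the "generating server cert ... san=[127.0.0.1 172.18.223.89 localhost minikube multinode-517300-m02]" step above: the provisioner issues a Docker server certificate signed by the machine CA with those addresses and hostnames as Subject Alternative Names. An illustrative Go sketch of such a certificate (not minikube's implementation; makeServerCert is a hypothetical name):

    package provision

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"math/big"
    	"net"
    	"time"
    )

    // makeServerCert signs a TLS server certificate for the logged SAN list
    // with an already-loaded CA certificate and key.
    func makeServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) (certDER []byte, key *rsa.PrivateKey, err error) {
    	key, err = rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		return nil, nil, err
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(time.Now().UnixNano()),
    		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-517300-m02"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().AddDate(3, 0, 0),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		DNSNames:     []string{"localhost", "minikube", "multinode-517300-m02"},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.18.223.89")},
    	}
    	certDER, err = x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
    	return certDER, key, err
    }
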
	I0528 15:06:25.919665   12872 buildroot.go:189] setting minikube options for container-runtime
	I0528 15:06:25.920289   12872 config.go:182] Loaded profile config "multinode-517300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0528 15:06:25.920492   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300-m02 ).state
	I0528 15:06:28.011876   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:06:28.011876   12872 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:06:28.023248   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300-m02 ).networkadapters[0]).ipaddresses[0]
	I0528 15:06:30.533565   12872 main.go:141] libmachine: [stdout =====>] : 172.18.223.89
	
	I0528 15:06:30.533565   12872 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:06:30.552489   12872 main.go:141] libmachine: Using SSH client type: native
	I0528 15:06:30.553083   12872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5ca4a0] 0x5cd080 <nil>  [] 0s} 172.18.223.89 22 <nil> <nil>}
	I0528 15:06:30.553083   12872 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0528 15:06:30.674529   12872 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0528 15:06:30.674529   12872 buildroot.go:70] root file system type: tmpfs
	I0528 15:06:30.675161   12872 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0528 15:06:30.675241   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300-m02 ).state
	I0528 15:06:32.779532   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:06:32.779589   12872 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:06:32.779589   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300-m02 ).networkadapters[0]).ipaddresses[0]
	I0528 15:06:35.299767   12872 main.go:141] libmachine: [stdout =====>] : 172.18.223.89
	
	I0528 15:06:35.299767   12872 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:06:35.305504   12872 main.go:141] libmachine: Using SSH client type: native
	I0528 15:06:35.306052   12872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5ca4a0] 0x5cd080 <nil>  [] 0s} 172.18.223.89 22 <nil> <nil>}
	I0528 15:06:35.306179   12872 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.18.214.123"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0528 15:06:35.449981   12872 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.18.214.123
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0528 15:06:35.450136   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300-m02 ).state
	I0528 15:06:37.560360   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:06:37.560360   12872 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:06:37.560360   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300-m02 ).networkadapters[0]).ipaddresses[0]
	I0528 15:06:40.043775   12872 main.go:141] libmachine: [stdout =====>] : 172.18.223.89
	
	I0528 15:06:40.055523   12872 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:06:40.061417   12872 main.go:141] libmachine: Using SSH client type: native
	I0528 15:06:40.062051   12872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5ca4a0] 0x5cd080 <nil>  [] 0s} 172.18.223.89 22 <nil> <nil>}
	I0528 15:06:40.062051   12872 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0528 15:06:42.186474   12872 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0528 15:06:42.186539   12872 machine.go:97] duration metric: took 45.1957534s to provisionDockerMachine
	I0528 15:06:42.186601   12872 client.go:171] duration metric: took 1m56.0521427s to LocalClient.Create
	I0528 15:06:42.186601   12872 start.go:167] duration metric: took 1m56.052204s to libmachine.API.Create "multinode-517300"
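
The diff/install exchange a few lines above shows how the generated unit is applied: it is written to /lib/systemd/system/docker.service.new, and only when it differs from the existing unit (here the file did not exist at all, hence the diff error) is it moved into place and Docker reloaded, enabled, and restarted. A sketch of that idempotent install step, using a hypothetical run callback in place of minikube's ssh_runner; the shell one-liner is the one in the log:

    package provision

    import "fmt"

    // installDockerUnit replaces docker.service only if docker.service.new differs,
    // then reloads systemd and restarts Docker. run is assumed to execute a command
    // on the guest over SSH and return its combined output.
    func installDockerUnit(run func(cmd string) (string, error)) error {
    	const cmd = `sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || ` +
    		`{ sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; ` +
    		`sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }`
    	if out, err := run(cmd); err != nil {
    		return fmt.Errorf("installing docker.service: %v: %s", err, out)
    	}
    	return nil
    }
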
	I0528 15:06:42.186660   12872 start.go:293] postStartSetup for "multinode-517300-m02" (driver="hyperv")
	I0528 15:06:42.186696   12872 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0528 15:06:42.199835   12872 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0528 15:06:42.199835   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300-m02 ).state
	I0528 15:06:44.320792   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:06:44.320792   12872 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:06:44.331751   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300-m02 ).networkadapters[0]).ipaddresses[0]
	I0528 15:06:46.807881   12872 main.go:141] libmachine: [stdout =====>] : 172.18.223.89
	
	I0528 15:06:46.807881   12872 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:06:46.808126   12872 sshutil.go:53] new ssh client: &{IP:172.18.223.89 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-517300-m02\id_rsa Username:docker}
	I0528 15:06:46.916841   12872 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.7169941s)
	I0528 15:06:46.929656   12872 ssh_runner.go:195] Run: cat /etc/os-release
	I0528 15:06:46.936830   12872 command_runner.go:130] > NAME=Buildroot
	I0528 15:06:46.936830   12872 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0528 15:06:46.936830   12872 command_runner.go:130] > ID=buildroot
	I0528 15:06:46.936830   12872 command_runner.go:130] > VERSION_ID=2023.02.9
	I0528 15:06:46.936830   12872 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0528 15:06:46.937038   12872 info.go:137] Remote host: Buildroot 2023.02.9
	I0528 15:06:46.937038   12872 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0528 15:06:46.937458   12872 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0528 15:06:46.938388   12872 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\17802.pem -> 17802.pem in /etc/ssl/certs
	I0528 15:06:46.938388   12872 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\17802.pem -> /etc/ssl/certs/17802.pem
	I0528 15:06:46.950001   12872 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0528 15:06:46.967188   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\17802.pem --> /etc/ssl/certs/17802.pem (1708 bytes)
	I0528 15:06:47.011743   12872 start.go:296] duration metric: took 4.8250352s for postStartSetup
	I0528 15:06:47.015345   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300-m02 ).state
	I0528 15:06:49.082948   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:06:49.082948   12872 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:06:49.082948   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300-m02 ).networkadapters[0]).ipaddresses[0]
	I0528 15:06:51.562556   12872 main.go:141] libmachine: [stdout =====>] : 172.18.223.89
	
	I0528 15:06:51.562556   12872 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:06:51.573776   12872 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-517300\config.json ...
	I0528 15:06:51.576597   12872 start.go:128] duration metric: took 2m5.4463098s to createHost
	I0528 15:06:51.576713   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300-m02 ).state
	I0528 15:06:53.664515   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:06:53.664515   12872 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:06:53.675406   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300-m02 ).networkadapters[0]).ipaddresses[0]
	I0528 15:06:56.176954   12872 main.go:141] libmachine: [stdout =====>] : 172.18.223.89
	
	I0528 15:06:56.176954   12872 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:06:56.191022   12872 main.go:141] libmachine: Using SSH client type: native
	I0528 15:06:56.191654   12872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5ca4a0] 0x5cd080 <nil>  [] 0s} 172.18.223.89 22 <nil> <nil>}
	I0528 15:06:56.191685   12872 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0528 15:06:56.329507   12872 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716934016.339582443
	
	I0528 15:06:56.329507   12872 fix.go:216] guest clock: 1716934016.339582443
	I0528 15:06:56.329507   12872 fix.go:229] Guest: 2024-05-28 15:06:56.339582443 -0700 PDT Remote: 2024-05-28 15:06:51.5765979 -0700 PDT m=+340.496319401 (delta=4.762984543s)
	I0528 15:06:56.329652   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300-m02 ).state
	I0528 15:06:58.401583   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:06:58.401583   12872 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:06:58.402582   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300-m02 ).networkadapters[0]).ipaddresses[0]
	I0528 15:07:00.901713   12872 main.go:141] libmachine: [stdout =====>] : 172.18.223.89
	
	I0528 15:07:00.901713   12872 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:07:00.919576   12872 main.go:141] libmachine: Using SSH client type: native
	I0528 15:07:00.920312   12872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5ca4a0] 0x5cd080 <nil>  [] 0s} 172.18.223.89 22 <nil> <nil>}
	I0528 15:07:00.920312   12872 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1716934016
	I0528 15:07:01.053361   12872 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue May 28 22:06:56 UTC 2024
	
	I0528 15:07:01.053426   12872 fix.go:236] clock set: Tue May 28 22:06:56 UTC 2024
	 (err=<nil>)
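
The "guest clock" lines above are a clock-skew check: the guest's time is read with date +%s.%N, compared against the host (delta=4.762984543s in this run), and reset with sudo date -s @<unix seconds> when the drift is too large. A compact sketch under the same assumptions (run is the same kind of hypothetical SSH callback as in the earlier sketch):

    package provision

    import (
    	"fmt"
    	"strconv"
    	"strings"
    	"time"
    )

    // syncGuestClock resets the guest clock to the host's time when the measured
    // drift exceeds threshold.
    func syncGuestClock(run func(cmd string) (string, error), threshold time.Duration) error {
    	out, err := run(`date +%s.%N`)
    	if err != nil {
    		return err
    	}
    	secs, err := strconv.ParseFloat(strings.TrimSpace(out), 64)
    	if err != nil {
    		return fmt.Errorf("parsing guest clock %q: %v", out, err)
    	}
    	guest := time.Unix(0, int64(secs*float64(time.Second)))
    	delta := guest.Sub(time.Now())
    	if delta > threshold || delta < -threshold {
    		_, err = run(fmt.Sprintf("sudo date -s @%d", time.Now().Unix()))
    	}
    	return err
    }
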
	I0528 15:07:01.053426   12872 start.go:83] releasing machines lock for "multinode-517300-m02", held for 2m14.9232925s
	I0528 15:07:01.053460   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300-m02 ).state
	I0528 15:07:03.164329   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:07:03.174890   12872 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:07:03.175045   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300-m02 ).networkadapters[0]).ipaddresses[0]
	I0528 15:07:05.629883   12872 main.go:141] libmachine: [stdout =====>] : 172.18.223.89
	
	I0528 15:07:05.642405   12872 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:07:05.645260   12872 out.go:177] * Found network options:
	I0528 15:07:05.647879   12872 out.go:177]   - NO_PROXY=172.18.214.123
	W0528 15:07:05.650281   12872 proxy.go:119] fail to check proxy env: Error ip not in block
	I0528 15:07:05.652408   12872 out.go:177]   - NO_PROXY=172.18.214.123
	W0528 15:07:05.654709   12872 proxy.go:119] fail to check proxy env: Error ip not in block
	W0528 15:07:05.656471   12872 proxy.go:119] fail to check proxy env: Error ip not in block
	I0528 15:07:05.658305   12872 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0528 15:07:05.658305   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300-m02 ).state
	I0528 15:07:05.666961   12872 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0528 15:07:05.666961   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300-m02 ).state
	I0528 15:07:07.835545   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:07:07.849119   12872 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:07:07.835545   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:07:07.849119   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300-m02 ).networkadapters[0]).ipaddresses[0]
	I0528 15:07:07.849119   12872 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:07:07.849281   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300-m02 ).networkadapters[0]).ipaddresses[0]
	I0528 15:07:10.470274   12872 main.go:141] libmachine: [stdout =====>] : 172.18.223.89
	
	I0528 15:07:10.470274   12872 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:07:10.470688   12872 sshutil.go:53] new ssh client: &{IP:172.18.223.89 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-517300-m02\id_rsa Username:docker}
	I0528 15:07:10.498472   12872 main.go:141] libmachine: [stdout =====>] : 172.18.223.89
	
	I0528 15:07:10.498472   12872 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:07:10.499157   12872 sshutil.go:53] new ssh client: &{IP:172.18.223.89 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-517300-m02\id_rsa Username:docker}
	I0528 15:07:10.794014   12872 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0528 15:07:10.794103   12872 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0528 15:07:10.794103   12872 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.1271287s)
	I0528 15:07:10.794204   12872 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.1357843s)
	W0528 15:07:10.794204   12872 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0528 15:07:10.807084   12872 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0528 15:07:10.830559   12872 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0528 15:07:10.836028   12872 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0528 15:07:10.836028   12872 start.go:494] detecting cgroup driver to use...
	I0528 15:07:10.836028   12872 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0528 15:07:10.877809   12872 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0528 15:07:10.888090   12872 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0528 15:07:10.923292   12872 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0528 15:07:10.941898   12872 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0528 15:07:10.952725   12872 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0528 15:07:10.985320   12872 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0528 15:07:11.020204   12872 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0528 15:07:11.058282   12872 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0528 15:07:11.090844   12872 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0528 15:07:11.126888   12872 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0528 15:07:11.157863   12872 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0528 15:07:11.192160   12872 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0528 15:07:11.223310   12872 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0528 15:07:11.242765   12872 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0528 15:07:11.258420   12872 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0528 15:07:11.291443   12872 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 15:07:11.483218   12872 ssh_runner.go:195] Run: sudo systemctl restart containerd
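
The sed one-liners above (15:07:10.888 through 15:07:11.192) are in-place rewrites of /etc/containerd/config.toml: pin the pause image to registry.k8s.io/pause:3.9, force SystemdCgroup = false for the cgroupfs driver, switch the runc runtime handler to io.containerd.runc.v2, and point conf_dir at /etc/cni/net.d. Two of those rewrites expressed locally as Go regexp replacements, purely for illustration (the sample config text is hypothetical):

    package main

    import (
    	"fmt"
    	"regexp"
    )

    func main() {
    	// Hypothetical excerpt of /etc/containerd/config.toml before the edits.
    	config := `[plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.8"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
        SystemdCgroup = true
    `
    	// Equivalent of: sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|'
    	config = regexp.MustCompile(`(?m)^(\s*)sandbox_image = .*$`).
    		ReplaceAllString(config, `${1}sandbox_image = "registry.k8s.io/pause:3.9"`)
    	// Equivalent of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
    	config = regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`).
    		ReplaceAllString(config, `${1}SystemdCgroup = false`)
    	fmt.Print(config)
    }
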
	I0528 15:07:11.515950   12872 start.go:494] detecting cgroup driver to use...
	I0528 15:07:11.528463   12872 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0528 15:07:11.559335   12872 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0528 15:07:11.559335   12872 command_runner.go:130] > [Unit]
	I0528 15:07:11.559335   12872 command_runner.go:130] > Description=Docker Application Container Engine
	I0528 15:07:11.559335   12872 command_runner.go:130] > Documentation=https://docs.docker.com
	I0528 15:07:11.559335   12872 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0528 15:07:11.559335   12872 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0528 15:07:11.559335   12872 command_runner.go:130] > StartLimitBurst=3
	I0528 15:07:11.559335   12872 command_runner.go:130] > StartLimitIntervalSec=60
	I0528 15:07:11.559335   12872 command_runner.go:130] > [Service]
	I0528 15:07:11.559335   12872 command_runner.go:130] > Type=notify
	I0528 15:07:11.559335   12872 command_runner.go:130] > Restart=on-failure
	I0528 15:07:11.559335   12872 command_runner.go:130] > Environment=NO_PROXY=172.18.214.123
	I0528 15:07:11.559335   12872 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0528 15:07:11.559335   12872 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0528 15:07:11.559335   12872 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0528 15:07:11.559335   12872 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0528 15:07:11.559335   12872 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0528 15:07:11.559335   12872 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0528 15:07:11.559335   12872 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0528 15:07:11.559335   12872 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0528 15:07:11.559335   12872 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0528 15:07:11.559335   12872 command_runner.go:130] > ExecStart=
	I0528 15:07:11.559869   12872 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0528 15:07:11.559869   12872 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0528 15:07:11.559869   12872 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0528 15:07:11.560190   12872 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0528 15:07:11.560190   12872 command_runner.go:130] > LimitNOFILE=infinity
	I0528 15:07:11.560190   12872 command_runner.go:130] > LimitNPROC=infinity
	I0528 15:07:11.560190   12872 command_runner.go:130] > LimitCORE=infinity
	I0528 15:07:11.560190   12872 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0528 15:07:11.560190   12872 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0528 15:07:11.560190   12872 command_runner.go:130] > TasksMax=infinity
	I0528 15:07:11.560190   12872 command_runner.go:130] > TimeoutStartSec=0
	I0528 15:07:11.560190   12872 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0528 15:07:11.560190   12872 command_runner.go:130] > Delegate=yes
	I0528 15:07:11.560190   12872 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0528 15:07:11.560190   12872 command_runner.go:130] > KillMode=process
	I0528 15:07:11.560190   12872 command_runner.go:130] > [Install]
	I0528 15:07:11.560190   12872 command_runner.go:130] > WantedBy=multi-user.target
	I0528 15:07:11.577113   12872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0528 15:07:11.608721   12872 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0528 15:07:11.658206   12872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0528 15:07:11.701167   12872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0528 15:07:11.735077   12872 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0528 15:07:11.796974   12872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0528 15:07:11.822295   12872 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0528 15:07:11.854981   12872 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0528 15:07:11.868922   12872 ssh_runner.go:195] Run: which cri-dockerd
	I0528 15:07:11.872233   12872 command_runner.go:130] > /usr/bin/cri-dockerd
	I0528 15:07:11.888921   12872 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0528 15:07:11.906365   12872 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0528 15:07:11.947323   12872 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0528 15:07:12.140472   12872 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0528 15:07:12.311126   12872 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0528 15:07:12.311126   12872 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0528 15:07:12.356451   12872 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 15:07:12.537968   12872 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0528 15:07:15.055153   12872 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5042994s)
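Two small config files are written in this stretch. The crictl.yaml line is echoed back verbatim in the log; the daemon.json body below is an assumption about what a typical "cgroupfs as cgroup driver" payload looks like (the log only records its size, 130 bytes), so treat it as a sketch rather than the exact file that was copied.

# crictl.yaml content taken verbatim from the log above.
printf 'runtime-endpoint: unix:///var/run/cri-dockerd.sock\n' | sudo tee /etc/crictl.yaml
# daemon.json body is assumed; only the cgroup-driver intent is stated in the log.
cat <<'EOF' | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=cgroupfs"]
}
EOF
sudo systemctl daemon-reload && sudo systemctl restart docker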
	I0528 15:07:15.068867   12872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0528 15:07:15.102433   12872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0528 15:07:15.137654   12872 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0528 15:07:15.324763   12872 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0528 15:07:15.514271   12872 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 15:07:15.701207   12872 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0528 15:07:15.743483   12872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0528 15:07:15.778097   12872 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 15:07:15.984119   12872 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0528 15:07:16.091724   12872 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0528 15:07:16.106549   12872 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0528 15:07:16.112410   12872 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0528 15:07:16.112410   12872 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0528 15:07:16.112410   12872 command_runner.go:130] > Device: 0,22	Inode: 886         Links: 1
	I0528 15:07:16.112410   12872 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0528 15:07:16.112410   12872 command_runner.go:130] > Access: 2024-05-28 22:07:16.019420820 +0000
	I0528 15:07:16.112410   12872 command_runner.go:130] > Modify: 2024-05-28 22:07:16.019420820 +0000
	I0528 15:07:16.112410   12872 command_runner.go:130] > Change: 2024-05-28 22:07:16.022420822 +0000
	I0528 15:07:16.112410   12872 command_runner.go:130] >  Birth: -
	I0528 15:07:16.115942   12872 start.go:562] Will wait 60s for crictl version
	I0528 15:07:16.129382   12872 ssh_runner.go:195] Run: which crictl
	I0528 15:07:16.134613   12872 command_runner.go:130] > /usr/bin/crictl
	I0528 15:07:16.149328   12872 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0528 15:07:16.206176   12872 command_runner.go:130] > Version:  0.1.0
	I0528 15:07:16.206176   12872 command_runner.go:130] > RuntimeName:  docker
	I0528 15:07:16.206176   12872 command_runner.go:130] > RuntimeVersion:  26.0.2
	I0528 15:07:16.206176   12872 command_runner.go:130] > RuntimeApiVersion:  v1
	I0528 15:07:16.206176   12872 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0528 15:07:16.216283   12872 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0528 15:07:16.243233   12872 command_runner.go:130] > 26.0.2
	I0528 15:07:16.253968   12872 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0528 15:07:16.282673   12872 command_runner.go:130] > 26.0.2
	I0528 15:07:16.287291   12872 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.0.2 ...
	I0528 15:07:16.289736   12872 out.go:177]   - env NO_PROXY=172.18.214.123
	I0528 15:07:16.292801   12872 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0528 15:07:16.297759   12872 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0528 15:07:16.297759   12872 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0528 15:07:16.297759   12872 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0528 15:07:16.297759   12872 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:91:32:cc Flags:up|broadcast|multicast|running}
	I0528 15:07:16.301279   12872 ip.go:210] interface addr: fe80::6a65:d82d:f435:d71f/64
	I0528 15:07:16.301327   12872 ip.go:210] interface addr: 172.18.208.1/20
	I0528 15:07:16.314865   12872 ssh_runner.go:195] Run: grep 172.18.208.1	host.minikube.internal$ /etc/hosts
	I0528 15:07:16.317457   12872 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.18.208.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0528 15:07:16.340192   12872 mustload.go:65] Loading cluster: multinode-517300
	I0528 15:07:16.340824   12872 config.go:182] Loaded profile config "multinode-517300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0528 15:07:16.341623   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300 ).state
	I0528 15:07:18.384314   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:07:18.384314   12872 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:07:18.395569   12872 host.go:66] Checking if "multinode-517300" exists ...
	I0528 15:07:18.396308   12872 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-517300 for IP: 172.18.223.89
	I0528 15:07:18.396399   12872 certs.go:194] generating shared ca certs ...
	I0528 15:07:18.396399   12872 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 15:07:18.396979   12872 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0528 15:07:18.397405   12872 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0528 15:07:18.397907   12872 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0528 15:07:18.397907   12872 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0528 15:07:18.398492   12872 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0528 15:07:18.398725   12872 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0528 15:07:18.399300   12872 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\1780.pem (1338 bytes)
	W0528 15:07:18.399628   12872 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\1780_empty.pem, impossibly tiny 0 bytes
	I0528 15:07:18.399628   12872 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0528 15:07:18.399628   12872 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0528 15:07:18.400376   12872 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0528 15:07:18.400716   12872 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0528 15:07:18.401228   12872 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\17802.pem (1708 bytes)
	I0528 15:07:18.401417   12872 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\1780.pem -> /usr/share/ca-certificates/1780.pem
	I0528 15:07:18.401417   12872 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\17802.pem -> /usr/share/ca-certificates/17802.pem
	I0528 15:07:18.401417   12872 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0528 15:07:18.402012   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0528 15:07:18.446734   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0528 15:07:18.492081   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0528 15:07:18.542683   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0528 15:07:18.585959   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\1780.pem --> /usr/share/ca-certificates/1780.pem (1338 bytes)
	I0528 15:07:18.626770   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\17802.pem --> /usr/share/ca-certificates/17802.pem (1708 bytes)
	I0528 15:07:18.672305   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0528 15:07:18.726991   12872 ssh_runner.go:195] Run: openssl version
	I0528 15:07:18.730869   12872 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0528 15:07:18.747150   12872 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0528 15:07:18.778207   12872 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0528 15:07:18.781378   12872 command_runner.go:130] > -rw-r--r-- 1 root root 1111 May 28 20:25 /usr/share/ca-certificates/minikubeCA.pem
	I0528 15:07:18.781378   12872 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 28 20:25 /usr/share/ca-certificates/minikubeCA.pem
	I0528 15:07:18.787172   12872 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0528 15:07:18.799179   12872 command_runner.go:130] > b5213941
	I0528 15:07:18.816209   12872 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0528 15:07:18.849660   12872 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1780.pem && ln -fs /usr/share/ca-certificates/1780.pem /etc/ssl/certs/1780.pem"
	I0528 15:07:18.882694   12872 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1780.pem
	I0528 15:07:18.886745   12872 command_runner.go:130] > -rw-r--r-- 1 root root 1338 May 28 20:41 /usr/share/ca-certificates/1780.pem
	I0528 15:07:18.891628   12872 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 28 20:41 /usr/share/ca-certificates/1780.pem
	I0528 15:07:18.904060   12872 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1780.pem
	I0528 15:07:18.908307   12872 command_runner.go:130] > 51391683
	I0528 15:07:18.922471   12872 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1780.pem /etc/ssl/certs/51391683.0"
	I0528 15:07:18.953545   12872 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17802.pem && ln -fs /usr/share/ca-certificates/17802.pem /etc/ssl/certs/17802.pem"
	I0528 15:07:18.984285   12872 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17802.pem
	I0528 15:07:18.991161   12872 command_runner.go:130] > -rw-r--r-- 1 root root 1708 May 28 20:41 /usr/share/ca-certificates/17802.pem
	I0528 15:07:18.991277   12872 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 28 20:41 /usr/share/ca-certificates/17802.pem
	I0528 15:07:19.002989   12872 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17802.pem
	I0528 15:07:19.012553   12872 command_runner.go:130] > 3ec20f2e
	I0528 15:07:19.024887   12872 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/17802.pem /etc/ssl/certs/3ec20f2e.0"
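The openssl/ln sequence above follows the usual OpenSSL trust-store convention: each CA PEM under /etc/ssl/certs gets a symlink named after its subject hash. A compact sketch of the same pattern, using the file names from this log (the loop itself is illustrative, not minikube's code, and assumes the /etc/ssl/certs copies were already linked in the earlier step):

for pem in minikubeCA.pem 1780.pem 17802.pem; do
  # hash comes from `openssl x509 -hash`, as in the log (b5213941, 51391683, 3ec20f2e)
  h=$(openssl x509 -hash -noout -in "/usr/share/ca-certificates/$pem")
  sudo ln -fs "/etc/ssl/certs/$pem" "/etc/ssl/certs/$h.0"
done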
	I0528 15:07:19.056585   12872 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0528 15:07:19.060199   12872 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0528 15:07:19.065451   12872 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0528 15:07:19.065678   12872 kubeadm.go:928] updating node {m02 172.18.223.89 8443 v1.30.1 docker false true} ...
	I0528 15:07:19.065678   12872 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-517300-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.18.223.89
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:multinode-517300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0528 15:07:19.078122   12872 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0528 15:07:19.095567   12872 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/binaries/v1.30.1': No such file or directory
	I0528 15:07:19.096455   12872 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.1': No such file or directory
	
	Initiating transfer...
	I0528 15:07:19.108454   12872 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.1
	I0528 15:07:19.128124   12872 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl.sha256
	I0528 15:07:19.128124   12872 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet.sha256
	I0528 15:07:19.128124   12872 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm.sha256
	I0528 15:07:19.128198   12872 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubectl -> /var/lib/minikube/binaries/v1.30.1/kubectl
	I0528 15:07:19.128198   12872 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubeadm -> /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0528 15:07:19.142557   12872 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0528 15:07:19.142557   12872 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubectl
	I0528 15:07:19.145832   12872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0528 15:07:19.151515   12872 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubeadm': No such file or directory
	I0528 15:07:19.151515   12872 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubectl': No such file or directory
	I0528 15:07:19.151515   12872 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubectl': No such file or directory
	I0528 15:07:19.151515   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubectl --> /var/lib/minikube/binaries/v1.30.1/kubectl (51454104 bytes)
	I0528 15:07:19.151515   12872 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubeadm': No such file or directory
	I0528 15:07:19.151515   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubeadm --> /var/lib/minikube/binaries/v1.30.1/kubeadm (50249880 bytes)
	I0528 15:07:19.187263   12872 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubelet -> /var/lib/minikube/binaries/v1.30.1/kubelet
	I0528 15:07:19.200042   12872 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubelet
	I0528 15:07:19.297992   12872 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubelet': No such file or directory
	I0528 15:07:19.298081   12872 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubelet': No such file or directory
	I0528 15:07:19.298264   12872 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.30.1/kubelet --> /var/lib/minikube/binaries/v1.30.1/kubelet (100100024 bytes)
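Each of the three downloads above carries a "?checksum=file:" query pointing at a companion .sha256 file. A hedged sketch of the equivalent manual verification, with the same version and URLs as the log (the loop is illustrative):

V=v1.30.1
for b in kubectl kubeadm kubelet; do
  curl -fsSLO "https://dl.k8s.io/release/$V/bin/linux/amd64/$b"
  curl -fsSLO "https://dl.k8s.io/release/$V/bin/linux/amd64/$b.sha256"
  # the .sha256 file contains only the digest, hence the two-space filename suffix
  echo "$(cat "$b.sha256")  $b" | sha256sum -c -
done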
	I0528 15:07:20.577620   12872 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0528 15:07:20.596762   12872 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (320 bytes)
	I0528 15:07:20.629117   12872 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0528 15:07:20.672382   12872 ssh_runner.go:195] Run: grep 172.18.214.123	control-plane.minikube.internal$ /etc/hosts
	I0528 15:07:20.677611   12872 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.18.214.123	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0528 15:07:20.709791   12872 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 15:07:20.918418   12872 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0528 15:07:20.950936   12872 host.go:66] Checking if "multinode-517300" exists ...
	I0528 15:07:20.951811   12872 start.go:316] joinCluster: &{Name:multinode-517300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
1 ClusterName:multinode-517300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.18.214.123 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.18.223.89 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertE
xpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0528 15:07:20.951979   12872 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0528 15:07:20.952061   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300 ).state
	I0528 15:07:23.103250   12872 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:07:23.114155   12872 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:07:23.114155   12872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300 ).networkadapters[0]).ipaddresses[0]
	I0528 15:07:25.754111   12872 main.go:141] libmachine: [stdout =====>] : 172.18.214.123
	
	I0528 15:07:25.754111   12872 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:07:25.755146   12872 sshutil.go:53] new ssh client: &{IP:172.18.214.123 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-517300\id_rsa Username:docker}
	I0528 15:07:25.957017   12872 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token yxs0s7.5h527sjfx23lw0ht --discovery-token-ca-cert-hash sha256:539ff17077d9a18fe6ace348cbfbe22affc6c6390f84ec7c9294c19b6ec125ea 
	I0528 15:07:25.957206   12872 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm token create --print-join-command --ttl=0": (5.0051166s)
	I0528 15:07:25.957206   12872 start.go:342] trying to join worker node "m02" to cluster: &{Name:m02 IP:172.18.223.89 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0528 15:07:25.957395   12872 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token yxs0s7.5h527sjfx23lw0ht --discovery-token-ca-cert-hash sha256:539ff17077d9a18fe6ace348cbfbe22affc6c6390f84ec7c9294c19b6ec125ea --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-517300-m02"
	I0528 15:07:26.178248   12872 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0528 15:07:27.507791   12872 command_runner.go:130] > [preflight] Running pre-flight checks
	I0528 15:07:27.507791   12872 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0528 15:07:27.507791   12872 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0528 15:07:27.507914   12872 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0528 15:07:27.507914   12872 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0528 15:07:27.507914   12872 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0528 15:07:27.507914   12872 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0528 15:07:27.507914   12872 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 1.001556827s
	I0528 15:07:27.507914   12872 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap
	I0528 15:07:27.507914   12872 command_runner.go:130] > This node has joined the cluster:
	I0528 15:07:27.508011   12872 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0528 15:07:27.508011   12872 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0528 15:07:27.508011   12872 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0528 15:07:27.508011   12872 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token yxs0s7.5h527sjfx23lw0ht --discovery-token-ca-cert-hash sha256:539ff17077d9a18fe6ace348cbfbe22affc6c6390f84ec7c9294c19b6ec125ea --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-517300-m02": (1.5506123s)
	I0528 15:07:27.508171   12872 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0528 15:07:27.744573   12872 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0528 15:07:27.943214   12872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-517300-m02 minikube.k8s.io/updated_at=2024_05_28T15_07_27_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=c95b4fdda455689199e2a93674568b261e34dc82 minikube.k8s.io/name=multinode-517300 minikube.k8s.io/primary=false
	I0528 15:07:28.048945   12872 command_runner.go:130] > node/multinode-517300-m02 labeled
	I0528 15:07:28.048981   12872 start.go:318] duration metric: took 7.097151s to joinCluster
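As the kubeadm output above suggests, the join can be confirmed from the control plane. A quick check against this profile's kubeconfig context, assuming (as is usual for minikube) that the context is named after the profile:

kubectl --context multinode-517300 get nodes -o wide
kubectl --context multinode-517300 get node multinode-517300-m02 \
  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'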
	I0528 15:07:28.048981   12872 start.go:234] Will wait 6m0s for node &{Name:m02 IP:172.18.223.89 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0528 15:07:28.052134   12872 out.go:177] * Verifying Kubernetes components...
	I0528 15:07:28.049696   12872 config.go:182] Loaded profile config "multinode-517300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0528 15:07:28.066377   12872 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 15:07:28.251649   12872 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0528 15:07:28.279671   12872 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0528 15:07:28.280419   12872 kapi.go:59] client config for multinode-517300: &rest.Config{Host:"https://172.18.214.123:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-517300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-517300\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CA
Data:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1a6b5a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0528 15:07:28.281369   12872 node_ready.go:35] waiting up to 6m0s for node "multinode-517300-m02" to be "Ready" ...
	I0528 15:07:28.281570   12872 round_trippers.go:463] GET https://172.18.214.123:8443/api/v1/nodes/multinode-517300-m02
	I0528 15:07:28.281646   12872 round_trippers.go:469] Request Headers:
	I0528 15:07:28.281646   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:07:28.281646   12872 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:07:28.294419   12872 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0528 15:07:28.294419   12872 round_trippers.go:577] Response Headers:
	I0528 15:07:28.294419   12872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:07:28.294419   12872 round_trippers.go:580]     Content-Type: application/json
	I0528 15:07:28.294419   12872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:07:28.294419   12872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:07:28.294419   12872 round_trippers.go:580]     Content-Length: 3920
	I0528 15:07:28.294419   12872 round_trippers.go:580]     Date: Tue, 28 May 2024 22:07:28 GMT
	I0528 15:07:28.294419   12872 round_trippers.go:580]     Audit-Id: 06c99c6b-7da7-4473-b1e5-681535db960b
	I0528 15:07:28.294419   12872 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300-m02","uid":"4ff35487-ffa9-4e96-b7b9-c6f33c06ac46","resourceVersion":"613","creationTimestamp":"2024-05-28T22:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_28T15_07_27_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:07:27Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 2896 chars]
	I0528 15:07:28.782847   12872 round_trippers.go:463] GET https://172.18.214.123:8443/api/v1/nodes/multinode-517300-m02
	I0528 15:07:28.782928   12872 round_trippers.go:469] Request Headers:
	I0528 15:07:28.782928   12872 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:07:28.782928   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:07:28.783761   12872 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:07:28.783761   12872 round_trippers.go:577] Response Headers:
	I0528 15:07:28.786331   12872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:07:28.786331   12872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:07:28.786331   12872 round_trippers.go:580]     Content-Length: 3920
	I0528 15:07:28.786331   12872 round_trippers.go:580]     Date: Tue, 28 May 2024 22:07:28 GMT
	I0528 15:07:28.786331   12872 round_trippers.go:580]     Audit-Id: dc82b6ff-2f37-4f3a-95d2-eec6c27ceac9
	I0528 15:07:28.786331   12872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:07:28.786331   12872 round_trippers.go:580]     Content-Type: application/json
	I0528 15:07:28.786331   12872 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300-m02","uid":"4ff35487-ffa9-4e96-b7b9-c6f33c06ac46","resourceVersion":"613","creationTimestamp":"2024-05-28T22:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_28T15_07_27_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:07:27Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 2896 chars]
	I0528 15:07:29.292901   12872 round_trippers.go:463] GET https://172.18.214.123:8443/api/v1/nodes/multinode-517300-m02
	I0528 15:07:29.292901   12872 round_trippers.go:469] Request Headers:
	I0528 15:07:29.292901   12872 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:07:29.293096   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:07:29.293459   12872 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:07:29.293459   12872 round_trippers.go:577] Response Headers:
	I0528 15:07:29.293459   12872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:07:29.293459   12872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:07:29.293459   12872 round_trippers.go:580]     Content-Length: 3920
	I0528 15:07:29.297986   12872 round_trippers.go:580]     Date: Tue, 28 May 2024 22:07:29 GMT
	I0528 15:07:29.297986   12872 round_trippers.go:580]     Audit-Id: bdfe7a2d-2004-4a66-bcc3-df4f8de90616
	I0528 15:07:29.297986   12872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:07:29.297986   12872 round_trippers.go:580]     Content-Type: application/json
	I0528 15:07:29.298068   12872 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300-m02","uid":"4ff35487-ffa9-4e96-b7b9-c6f33c06ac46","resourceVersion":"613","creationTimestamp":"2024-05-28T22:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_28T15_07_27_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:07:27Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 2896 chars]
	I0528 15:07:29.783859   12872 round_trippers.go:463] GET https://172.18.214.123:8443/api/v1/nodes/multinode-517300-m02
	I0528 15:07:29.783859   12872 round_trippers.go:469] Request Headers:
	I0528 15:07:29.783859   12872 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:07:29.783957   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:07:29.785669   12872 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0528 15:07:29.785669   12872 round_trippers.go:577] Response Headers:
	I0528 15:07:29.786986   12872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:07:29.786986   12872 round_trippers.go:580]     Content-Length: 3920
	I0528 15:07:29.786986   12872 round_trippers.go:580]     Date: Tue, 28 May 2024 22:07:29 GMT
	I0528 15:07:29.786986   12872 round_trippers.go:580]     Audit-Id: 1a61e569-04bc-4d89-868a-4f469a5339c4
	I0528 15:07:29.786986   12872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:07:29.786986   12872 round_trippers.go:580]     Content-Type: application/json
	I0528 15:07:29.786986   12872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:07:29.787071   12872 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300-m02","uid":"4ff35487-ffa9-4e96-b7b9-c6f33c06ac46","resourceVersion":"613","creationTimestamp":"2024-05-28T22:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_28T15_07_27_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:07:27Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 2896 chars]
	I0528 15:07:30.292898   12872 round_trippers.go:463] GET https://172.18.214.123:8443/api/v1/nodes/multinode-517300-m02
	I0528 15:07:30.292898   12872 round_trippers.go:469] Request Headers:
	I0528 15:07:30.292898   12872 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:07:30.292898   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:07:30.293495   12872 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:07:30.293495   12872 round_trippers.go:577] Response Headers:
	I0528 15:07:30.296760   12872 round_trippers.go:580]     Audit-Id: 44302176-ae62-4805-bdc8-e53254647421
	I0528 15:07:30.296760   12872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:07:30.296760   12872 round_trippers.go:580]     Content-Type: application/json
	I0528 15:07:30.296760   12872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:07:30.296977   12872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:07:30.296977   12872 round_trippers.go:580]     Content-Length: 3920
	I0528 15:07:30.296977   12872 round_trippers.go:580]     Date: Tue, 28 May 2024 22:07:30 GMT
	I0528 15:07:30.296977   12872 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300-m02","uid":"4ff35487-ffa9-4e96-b7b9-c6f33c06ac46","resourceVersion":"613","creationTimestamp":"2024-05-28T22:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_28T15_07_27_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:07:27Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 2896 chars]
	I0528 15:07:30.297779   12872 node_ready.go:53] node "multinode-517300-m02" has status "Ready":"False"
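The repeated GETs above are the programmatic form of waiting on the node's Ready condition. A CLI equivalent, with the same node name and 6m timeout used by the test, shown only as a sketch:

kubectl --context multinode-517300 wait --for=condition=Ready \
  node/multinode-517300-m02 --timeout=6m0s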
	I0528 15:07:30.787367   12872 round_trippers.go:463] GET https://172.18.214.123:8443/api/v1/nodes/multinode-517300-m02
	I0528 15:07:30.787367   12872 round_trippers.go:469] Request Headers:
	I0528 15:07:30.787367   12872 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:07:30.787367   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:07:30.788191   12872 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:07:30.788191   12872 round_trippers.go:577] Response Headers:
	I0528 15:07:30.788191   12872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:07:30.788191   12872 round_trippers.go:580]     Content-Type: application/json
	I0528 15:07:30.788191   12872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:07:30.788191   12872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:07:30.788191   12872 round_trippers.go:580]     Content-Length: 3920
	I0528 15:07:30.788191   12872 round_trippers.go:580]     Date: Tue, 28 May 2024 22:07:30 GMT
	I0528 15:07:30.788191   12872 round_trippers.go:580]     Audit-Id: 60327038-9123-4e07-9f42-304c4df075ab
	I0528 15:07:30.791970   12872 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300-m02","uid":"4ff35487-ffa9-4e96-b7b9-c6f33c06ac46","resourceVersion":"613","creationTimestamp":"2024-05-28T22:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_28T15_07_27_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:07:27Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 2896 chars]
	I0528 15:07:31.290921   12872 round_trippers.go:463] GET https://172.18.214.123:8443/api/v1/nodes/multinode-517300-m02
	I0528 15:07:31.290921   12872 round_trippers.go:469] Request Headers:
	I0528 15:07:31.291156   12872 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:07:31.291156   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:07:31.466025   12872 round_trippers.go:574] Response Status: 200 OK in 174 milliseconds
	I0528 15:07:31.466025   12872 round_trippers.go:577] Response Headers:
	I0528 15:07:31.466025   12872 round_trippers.go:580]     Content-Length: 4029
	I0528 15:07:31.466138   12872 round_trippers.go:580]     Date: Tue, 28 May 2024 22:07:31 GMT
	I0528 15:07:31.466138   12872 round_trippers.go:580]     Audit-Id: d434d021-de5c-4d30-b7f8-3116fae33eeb
	I0528 15:07:31.466138   12872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:07:31.466138   12872 round_trippers.go:580]     Content-Type: application/json
	I0528 15:07:31.466138   12872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:07:31.466180   12872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:07:31.466180   12872 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300-m02","uid":"4ff35487-ffa9-4e96-b7b9-c6f33c06ac46","resourceVersion":"620","creationTimestamp":"2024-05-28T22:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_28T15_07_27_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:07:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3005 chars]
	I0528 15:07:31.792795   12872 round_trippers.go:463] GET https://172.18.214.123:8443/api/v1/nodes/multinode-517300-m02
	I0528 15:07:31.792964   12872 round_trippers.go:469] Request Headers:
	I0528 15:07:31.792964   12872 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:07:31.792964   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:07:31.796665   12872 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 15:07:31.796741   12872 round_trippers.go:577] Response Headers:
	I0528 15:07:31.796741   12872 round_trippers.go:580]     Audit-Id: d06dd2ba-03a3-48eb-bbbf-c2c0f7cb9d5c
	I0528 15:07:31.796741   12872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:07:31.796741   12872 round_trippers.go:580]     Content-Type: application/json
	I0528 15:07:31.796741   12872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:07:31.796741   12872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:07:31.796741   12872 round_trippers.go:580]     Content-Length: 4029
	I0528 15:07:31.796741   12872 round_trippers.go:580]     Date: Tue, 28 May 2024 22:07:31 GMT
	I0528 15:07:31.796831   12872 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300-m02","uid":"4ff35487-ffa9-4e96-b7b9-c6f33c06ac46","resourceVersion":"620","creationTimestamp":"2024-05-28T22:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_28T15_07_27_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:07:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3005 chars]
	I0528 15:07:32.292543   12872 round_trippers.go:463] GET https://172.18.214.123:8443/api/v1/nodes/multinode-517300-m02
	I0528 15:07:32.292543   12872 round_trippers.go:469] Request Headers:
	I0528 15:07:32.292543   12872 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:07:32.292543   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:07:32.293008   12872 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:07:32.296680   12872 round_trippers.go:577] Response Headers:
	I0528 15:07:32.296680   12872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:07:32.296680   12872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:07:32.296680   12872 round_trippers.go:580]     Content-Length: 4029
	I0528 15:07:32.296680   12872 round_trippers.go:580]     Date: Tue, 28 May 2024 22:07:32 GMT
	I0528 15:07:32.296680   12872 round_trippers.go:580]     Audit-Id: 9b75dd0c-2466-46ca-97f6-5c9c2f195bf7
	I0528 15:07:32.296680   12872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:07:32.296745   12872 round_trippers.go:580]     Content-Type: application/json
	I0528 15:07:32.296894   12872 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300-m02","uid":"4ff35487-ffa9-4e96-b7b9-c6f33c06ac46","resourceVersion":"620","creationTimestamp":"2024-05-28T22:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_28T15_07_27_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:07:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3005 chars]
	I0528 15:07:32.792180   12872 round_trippers.go:463] GET https://172.18.214.123:8443/api/v1/nodes/multinode-517300-m02
	I0528 15:07:32.792486   12872 round_trippers.go:469] Request Headers:
	I0528 15:07:32.792486   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:07:32.792563   12872 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:07:32.798802   12872 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0528 15:07:32.798802   12872 round_trippers.go:577] Response Headers:
	I0528 15:07:32.798802   12872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:07:32.798802   12872 round_trippers.go:580]     Content-Type: application/json
	I0528 15:07:32.798802   12872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:07:32.798802   12872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:07:32.798802   12872 round_trippers.go:580]     Content-Length: 4029
	I0528 15:07:32.798802   12872 round_trippers.go:580]     Date: Tue, 28 May 2024 22:07:32 GMT
	I0528 15:07:32.798802   12872 round_trippers.go:580]     Audit-Id: f2e35789-4550-427e-a7a1-b9c296d52c41
	I0528 15:07:32.798802   12872 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300-m02","uid":"4ff35487-ffa9-4e96-b7b9-c6f33c06ac46","resourceVersion":"620","creationTimestamp":"2024-05-28T22:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_28T15_07_27_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:07:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3005 chars]
	I0528 15:07:32.799717   12872 node_ready.go:53] node "multinode-517300-m02" has status "Ready":"False"
	I0528 15:07:33.286256   12872 round_trippers.go:463] GET https://172.18.214.123:8443/api/v1/nodes/multinode-517300-m02
	I0528 15:07:33.286256   12872 round_trippers.go:469] Request Headers:
	I0528 15:07:33.286256   12872 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:07:33.286256   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:07:33.286771   12872 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:07:33.286771   12872 round_trippers.go:577] Response Headers:
	I0528 15:07:33.286771   12872 round_trippers.go:580]     Content-Type: application/json
	I0528 15:07:33.286771   12872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:07:33.290447   12872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:07:33.290447   12872 round_trippers.go:580]     Content-Length: 4029
	I0528 15:07:33.290447   12872 round_trippers.go:580]     Date: Tue, 28 May 2024 22:07:33 GMT
	I0528 15:07:33.290447   12872 round_trippers.go:580]     Audit-Id: dca93b83-03e8-4770-b628-cd1b63d4fc47
	I0528 15:07:33.290447   12872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:07:33.290731   12872 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300-m02","uid":"4ff35487-ffa9-4e96-b7b9-c6f33c06ac46","resourceVersion":"620","creationTimestamp":"2024-05-28T22:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_28T15_07_27_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:07:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3005 chars]
	I0528 15:07:33.792665   12872 round_trippers.go:463] GET https://172.18.214.123:8443/api/v1/nodes/multinode-517300-m02
	I0528 15:07:33.792665   12872 round_trippers.go:469] Request Headers:
	I0528 15:07:33.792731   12872 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:07:33.792731   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:07:33.795020   12872 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 15:07:33.795738   12872 round_trippers.go:577] Response Headers:
	I0528 15:07:33.795738   12872 round_trippers.go:580]     Audit-Id: 184641f3-3078-498f-8a62-852dfc933388
	I0528 15:07:33.795810   12872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:07:33.795810   12872 round_trippers.go:580]     Content-Type: application/json
	I0528 15:07:33.795810   12872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:07:33.795810   12872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:07:33.795810   12872 round_trippers.go:580]     Content-Length: 4029
	I0528 15:07:33.795810   12872 round_trippers.go:580]     Date: Tue, 28 May 2024 22:07:33 GMT
	I0528 15:07:33.795971   12872 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300-m02","uid":"4ff35487-ffa9-4e96-b7b9-c6f33c06ac46","resourceVersion":"620","creationTimestamp":"2024-05-28T22:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_28T15_07_27_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:07:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3005 chars]
	I0528 15:07:34.289277   12872 round_trippers.go:463] GET https://172.18.214.123:8443/api/v1/nodes/multinode-517300-m02
	I0528 15:07:34.289386   12872 round_trippers.go:469] Request Headers:
	I0528 15:07:34.289386   12872 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:07:34.289386   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:07:34.289636   12872 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:07:34.289636   12872 round_trippers.go:577] Response Headers:
	I0528 15:07:34.289636   12872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:07:34.289636   12872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:07:34.289636   12872 round_trippers.go:580]     Content-Length: 4029
	I0528 15:07:34.289636   12872 round_trippers.go:580]     Date: Tue, 28 May 2024 22:07:34 GMT
	I0528 15:07:34.293795   12872 round_trippers.go:580]     Audit-Id: c2053fee-2d4a-453d-a49c-b94fdffbd23c
	I0528 15:07:34.293795   12872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:07:34.293795   12872 round_trippers.go:580]     Content-Type: application/json
	I0528 15:07:34.293935   12872 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300-m02","uid":"4ff35487-ffa9-4e96-b7b9-c6f33c06ac46","resourceVersion":"620","creationTimestamp":"2024-05-28T22:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_28T15_07_27_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:07:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3005 chars]
	I0528 15:07:34.796350   12872 round_trippers.go:463] GET https://172.18.214.123:8443/api/v1/nodes/multinode-517300-m02
	I0528 15:07:34.796449   12872 round_trippers.go:469] Request Headers:
	I0528 15:07:34.796449   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:07:34.796449   12872 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:07:34.797240   12872 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:07:34.797240   12872 round_trippers.go:577] Response Headers:
	I0528 15:07:34.800859   12872 round_trippers.go:580]     Audit-Id: 130bb298-9727-4066-ae07-8b5c13320238
	I0528 15:07:34.800859   12872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:07:34.800859   12872 round_trippers.go:580]     Content-Type: application/json
	I0528 15:07:34.800859   12872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:07:34.800859   12872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:07:34.800859   12872 round_trippers.go:580]     Content-Length: 4029
	I0528 15:07:34.800859   12872 round_trippers.go:580]     Date: Tue, 28 May 2024 22:07:34 GMT
	I0528 15:07:34.801073   12872 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300-m02","uid":"4ff35487-ffa9-4e96-b7b9-c6f33c06ac46","resourceVersion":"620","creationTimestamp":"2024-05-28T22:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_28T15_07_27_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:07:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3005 chars]
	I0528 15:07:34.801474   12872 node_ready.go:53] node "multinode-517300-m02" has status "Ready":"False"
	I0528 15:07:35.282773   12872 round_trippers.go:463] GET https://172.18.214.123:8443/api/v1/nodes/multinode-517300-m02
	I0528 15:07:35.282773   12872 round_trippers.go:469] Request Headers:
	I0528 15:07:35.282773   12872 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:07:35.282885   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:07:35.283672   12872 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:07:35.283672   12872 round_trippers.go:577] Response Headers:
	I0528 15:07:35.283672   12872 round_trippers.go:580]     Date: Tue, 28 May 2024 22:07:35 GMT
	I0528 15:07:35.283672   12872 round_trippers.go:580]     Audit-Id: 5e626397-618b-4f37-88c8-5dd6dccb2144
	I0528 15:07:35.283672   12872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:07:35.287269   12872 round_trippers.go:580]     Content-Type: application/json
	I0528 15:07:35.287269   12872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:07:35.287269   12872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:07:35.287269   12872 round_trippers.go:580]     Content-Length: 4029
	I0528 15:07:35.287320   12872 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300-m02","uid":"4ff35487-ffa9-4e96-b7b9-c6f33c06ac46","resourceVersion":"620","creationTimestamp":"2024-05-28T22:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_28T15_07_27_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:07:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3005 chars]
	I0528 15:07:35.790079   12872 round_trippers.go:463] GET https://172.18.214.123:8443/api/v1/nodes/multinode-517300-m02
	I0528 15:07:35.790079   12872 round_trippers.go:469] Request Headers:
	I0528 15:07:35.790079   12872 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:07:35.790079   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:07:35.793648   12872 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 15:07:35.793648   12872 round_trippers.go:577] Response Headers:
	I0528 15:07:35.793648   12872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:07:35.793648   12872 round_trippers.go:580]     Content-Type: application/json
	I0528 15:07:35.793648   12872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:07:35.793648   12872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:07:35.793648   12872 round_trippers.go:580]     Content-Length: 4029
	I0528 15:07:35.793648   12872 round_trippers.go:580]     Date: Tue, 28 May 2024 22:07:35 GMT
	I0528 15:07:35.793896   12872 round_trippers.go:580]     Audit-Id: 53218546-d2fd-4fdb-b34e-0805ebcbff27
	I0528 15:07:35.794012   12872 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300-m02","uid":"4ff35487-ffa9-4e96-b7b9-c6f33c06ac46","resourceVersion":"620","creationTimestamp":"2024-05-28T22:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_28T15_07_27_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:07:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3005 chars]
	I0528 15:07:36.298957   12872 round_trippers.go:463] GET https://172.18.214.123:8443/api/v1/nodes/multinode-517300-m02
	I0528 15:07:36.299044   12872 round_trippers.go:469] Request Headers:
	I0528 15:07:36.299044   12872 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:07:36.299044   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:07:36.305003   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0528 15:07:36.305003   12872 round_trippers.go:577] Response Headers:
	I0528 15:07:36.305003   12872 round_trippers.go:580]     Audit-Id: a5aa6014-8754-46c0-888a-e115f22660c3
	I0528 15:07:36.305003   12872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:07:36.305003   12872 round_trippers.go:580]     Content-Type: application/json
	I0528 15:07:36.305003   12872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:07:36.305003   12872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:07:36.305003   12872 round_trippers.go:580]     Content-Length: 4029
	I0528 15:07:36.305003   12872 round_trippers.go:580]     Date: Tue, 28 May 2024 22:07:36 GMT
	I0528 15:07:36.305003   12872 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300-m02","uid":"4ff35487-ffa9-4e96-b7b9-c6f33c06ac46","resourceVersion":"620","creationTimestamp":"2024-05-28T22:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_28T15_07_27_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:07:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3005 chars]
	I0528 15:07:36.792672   12872 round_trippers.go:463] GET https://172.18.214.123:8443/api/v1/nodes/multinode-517300-m02
	I0528 15:07:36.792857   12872 round_trippers.go:469] Request Headers:
	I0528 15:07:36.792931   12872 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:07:36.792931   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:07:36.796743   12872 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 15:07:36.797038   12872 round_trippers.go:577] Response Headers:
	I0528 15:07:36.797038   12872 round_trippers.go:580]     Content-Length: 4029
	I0528 15:07:36.797038   12872 round_trippers.go:580]     Date: Tue, 28 May 2024 22:07:36 GMT
	I0528 15:07:36.797038   12872 round_trippers.go:580]     Audit-Id: c0f73857-5497-4bf1-b0b2-47f953c7cbf6
	I0528 15:07:36.797038   12872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:07:36.797038   12872 round_trippers.go:580]     Content-Type: application/json
	I0528 15:07:36.797038   12872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:07:36.797038   12872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:07:36.797176   12872 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300-m02","uid":"4ff35487-ffa9-4e96-b7b9-c6f33c06ac46","resourceVersion":"620","creationTimestamp":"2024-05-28T22:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_28T15_07_27_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:07:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3005 chars]
	I0528 15:07:37.281809   12872 round_trippers.go:463] GET https://172.18.214.123:8443/api/v1/nodes/multinode-517300-m02
	I0528 15:07:37.281809   12872 round_trippers.go:469] Request Headers:
	I0528 15:07:37.281809   12872 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:07:37.281809   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:07:37.282952   12872 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0528 15:07:37.282952   12872 round_trippers.go:577] Response Headers:
	I0528 15:07:37.282952   12872 round_trippers.go:580]     Audit-Id: 0afd03c8-007c-4138-bf7c-87f5fcf712e0
	I0528 15:07:37.282952   12872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:07:37.282952   12872 round_trippers.go:580]     Content-Type: application/json
	I0528 15:07:37.282952   12872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:07:37.282952   12872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:07:37.282952   12872 round_trippers.go:580]     Content-Length: 4029
	I0528 15:07:37.282952   12872 round_trippers.go:580]     Date: Tue, 28 May 2024 22:07:37 GMT
	I0528 15:07:37.282952   12872 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300-m02","uid":"4ff35487-ffa9-4e96-b7b9-c6f33c06ac46","resourceVersion":"620","creationTimestamp":"2024-05-28T22:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_28T15_07_27_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:07:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3005 chars]
	I0528 15:07:37.285873   12872 node_ready.go:53] node "multinode-517300-m02" has status "Ready":"False"
	I0528 15:07:37.787291   12872 round_trippers.go:463] GET https://172.18.214.123:8443/api/v1/nodes/multinode-517300-m02
	I0528 15:07:37.787370   12872 round_trippers.go:469] Request Headers:
	I0528 15:07:37.787370   12872 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:07:37.787370   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:07:37.788197   12872 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:07:37.791985   12872 round_trippers.go:577] Response Headers:
	I0528 15:07:37.791985   12872 round_trippers.go:580]     Audit-Id: 885e7a81-cd24-4ef7-9e46-97b338cc5374
	I0528 15:07:37.791985   12872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:07:37.791985   12872 round_trippers.go:580]     Content-Type: application/json
	I0528 15:07:37.791985   12872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:07:37.791985   12872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:07:37.791985   12872 round_trippers.go:580]     Date: Tue, 28 May 2024 22:07:37 GMT
	I0528 15:07:37.792261   12872 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300-m02","uid":"4ff35487-ffa9-4e96-b7b9-c6f33c06ac46","resourceVersion":"628","creationTimestamp":"2024-05-28T22:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_28T15_07_27_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:07:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3397 chars]
	I0528 15:07:38.289587   12872 round_trippers.go:463] GET https://172.18.214.123:8443/api/v1/nodes/multinode-517300-m02
	I0528 15:07:38.289750   12872 round_trippers.go:469] Request Headers:
	I0528 15:07:38.289750   12872 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:07:38.289750   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:07:38.291410   12872 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0528 15:07:38.291410   12872 round_trippers.go:577] Response Headers:
	I0528 15:07:38.291410   12872 round_trippers.go:580]     Date: Tue, 28 May 2024 22:07:38 GMT
	I0528 15:07:38.291410   12872 round_trippers.go:580]     Audit-Id: b47ddbfa-0eba-46aa-95fc-4a98bac81dab
	I0528 15:07:38.291410   12872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:07:38.291410   12872 round_trippers.go:580]     Content-Type: application/json
	I0528 15:07:38.291410   12872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:07:38.291410   12872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:07:38.294465   12872 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300-m02","uid":"4ff35487-ffa9-4e96-b7b9-c6f33c06ac46","resourceVersion":"628","creationTimestamp":"2024-05-28T22:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_28T15_07_27_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:07:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3397 chars]
	I0528 15:07:38.784106   12872 round_trippers.go:463] GET https://172.18.214.123:8443/api/v1/nodes/multinode-517300-m02
	I0528 15:07:38.784173   12872 round_trippers.go:469] Request Headers:
	I0528 15:07:38.784173   12872 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:07:38.784206   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:07:38.785552   12872 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0528 15:07:38.787638   12872 round_trippers.go:577] Response Headers:
	I0528 15:07:38.787638   12872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:07:38.787638   12872 round_trippers.go:580]     Content-Type: application/json
	I0528 15:07:38.787638   12872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:07:38.787710   12872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:07:38.787710   12872 round_trippers.go:580]     Date: Tue, 28 May 2024 22:07:38 GMT
	I0528 15:07:38.787710   12872 round_trippers.go:580]     Audit-Id: 8367843f-fc40-48e3-b558-c0e914616b2d
	I0528 15:07:38.787905   12872 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300-m02","uid":"4ff35487-ffa9-4e96-b7b9-c6f33c06ac46","resourceVersion":"628","creationTimestamp":"2024-05-28T22:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_28T15_07_27_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:07:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3397 chars]
	I0528 15:07:39.290463   12872 round_trippers.go:463] GET https://172.18.214.123:8443/api/v1/nodes/multinode-517300-m02
	I0528 15:07:39.290463   12872 round_trippers.go:469] Request Headers:
	I0528 15:07:39.290463   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:07:39.290463   12872 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:07:39.292160   12872 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0528 15:07:39.295884   12872 round_trippers.go:577] Response Headers:
	I0528 15:07:39.295884   12872 round_trippers.go:580]     Date: Tue, 28 May 2024 22:07:39 GMT
	I0528 15:07:39.295884   12872 round_trippers.go:580]     Audit-Id: 852328b6-3f52-428c-8071-668efb3a288f
	I0528 15:07:39.295884   12872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:07:39.295884   12872 round_trippers.go:580]     Content-Type: application/json
	I0528 15:07:39.295884   12872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:07:39.295884   12872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:07:39.296157   12872 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300-m02","uid":"4ff35487-ffa9-4e96-b7b9-c6f33c06ac46","resourceVersion":"628","creationTimestamp":"2024-05-28T22:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_28T15_07_27_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:07:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3397 chars]
	I0528 15:07:39.296739   12872 node_ready.go:53] node "multinode-517300-m02" has status "Ready":"False"
	I0528 15:07:39.797238   12872 round_trippers.go:463] GET https://172.18.214.123:8443/api/v1/nodes/multinode-517300-m02
	I0528 15:07:39.797315   12872 round_trippers.go:469] Request Headers:
	I0528 15:07:39.797315   12872 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:07:39.797315   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:07:39.797658   12872 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:07:39.797658   12872 round_trippers.go:577] Response Headers:
	I0528 15:07:39.797658   12872 round_trippers.go:580]     Audit-Id: a384edc1-4a65-47bf-98b1-609f0e7a3422
	I0528 15:07:39.797658   12872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:07:39.797658   12872 round_trippers.go:580]     Content-Type: application/json
	I0528 15:07:39.797658   12872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:07:39.797658   12872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:07:39.797658   12872 round_trippers.go:580]     Date: Tue, 28 May 2024 22:07:39 GMT
	I0528 15:07:39.801560   12872 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300-m02","uid":"4ff35487-ffa9-4e96-b7b9-c6f33c06ac46","resourceVersion":"628","creationTimestamp":"2024-05-28T22:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_28T15_07_27_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:07:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3397 chars]
	I0528 15:07:40.289527   12872 round_trippers.go:463] GET https://172.18.214.123:8443/api/v1/nodes/multinode-517300-m02
	I0528 15:07:40.289527   12872 round_trippers.go:469] Request Headers:
	I0528 15:07:40.289527   12872 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:07:40.289527   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:07:40.293998   12872 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0528 15:07:40.294065   12872 round_trippers.go:577] Response Headers:
	I0528 15:07:40.294065   12872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:07:40.294065   12872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:07:40.294065   12872 round_trippers.go:580]     Date: Tue, 28 May 2024 22:07:40 GMT
	I0528 15:07:40.294065   12872 round_trippers.go:580]     Audit-Id: a3ac9530-6864-497b-8977-0ef39a05a18a
	I0528 15:07:40.294065   12872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:07:40.294134   12872 round_trippers.go:580]     Content-Type: application/json
	I0528 15:07:40.294537   12872 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300-m02","uid":"4ff35487-ffa9-4e96-b7b9-c6f33c06ac46","resourceVersion":"628","creationTimestamp":"2024-05-28T22:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_28T15_07_27_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:07:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3397 chars]
	I0528 15:07:40.792893   12872 round_trippers.go:463] GET https://172.18.214.123:8443/api/v1/nodes/multinode-517300-m02
	I0528 15:07:40.792893   12872 round_trippers.go:469] Request Headers:
	I0528 15:07:40.792893   12872 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:07:40.792893   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:07:40.793422   12872 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:07:40.793422   12872 round_trippers.go:577] Response Headers:
	I0528 15:07:40.793422   12872 round_trippers.go:580]     Audit-Id: 422efa6c-d5aa-4516-a2ee-7a81ea1b0355
	I0528 15:07:40.793422   12872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:07:40.793422   12872 round_trippers.go:580]     Content-Type: application/json
	I0528 15:07:40.793422   12872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:07:40.797198   12872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:07:40.797198   12872 round_trippers.go:580]     Date: Tue, 28 May 2024 22:07:40 GMT
	I0528 15:07:40.797491   12872 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300-m02","uid":"4ff35487-ffa9-4e96-b7b9-c6f33c06ac46","resourceVersion":"628","creationTimestamp":"2024-05-28T22:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_28T15_07_27_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:07:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3397 chars]
	I0528 15:07:41.298102   12872 round_trippers.go:463] GET https://172.18.214.123:8443/api/v1/nodes/multinode-517300-m02
	I0528 15:07:41.298102   12872 round_trippers.go:469] Request Headers:
	I0528 15:07:41.298102   12872 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:07:41.298102   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:07:41.300861   12872 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 15:07:41.302798   12872 round_trippers.go:577] Response Headers:
	I0528 15:07:41.302798   12872 round_trippers.go:580]     Audit-Id: 12ec9837-b8dc-4697-b3d6-1a4f15c5e829
	I0528 15:07:41.302798   12872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:07:41.302863   12872 round_trippers.go:580]     Content-Type: application/json
	I0528 15:07:41.302863   12872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:07:41.302863   12872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:07:41.302863   12872 round_trippers.go:580]     Date: Tue, 28 May 2024 22:07:41 GMT
	I0528 15:07:41.302863   12872 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300-m02","uid":"4ff35487-ffa9-4e96-b7b9-c6f33c06ac46","resourceVersion":"628","creationTimestamp":"2024-05-28T22:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_28T15_07_27_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:07:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3397 chars]
	I0528 15:07:41.303412   12872 node_ready.go:53] node "multinode-517300-m02" has status "Ready":"False"
	I0528 15:07:41.792562   12872 round_trippers.go:463] GET https://172.18.214.123:8443/api/v1/nodes/multinode-517300-m02
	I0528 15:07:41.792562   12872 round_trippers.go:469] Request Headers:
	I0528 15:07:41.792562   12872 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:07:41.792562   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:07:41.793131   12872 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:07:41.793131   12872 round_trippers.go:577] Response Headers:
	I0528 15:07:41.796721   12872 round_trippers.go:580]     Audit-Id: fa4e1b80-a0d8-49d1-99ce-c84670112e93
	I0528 15:07:41.796721   12872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:07:41.796721   12872 round_trippers.go:580]     Content-Type: application/json
	I0528 15:07:41.796764   12872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:07:41.796764   12872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:07:41.796764   12872 round_trippers.go:580]     Date: Tue, 28 May 2024 22:07:41 GMT
	I0528 15:07:41.796798   12872 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300-m02","uid":"4ff35487-ffa9-4e96-b7b9-c6f33c06ac46","resourceVersion":"628","creationTimestamp":"2024-05-28T22:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_28T15_07_27_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:07:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3397 chars]
	I0528 15:07:42.282340   12872 round_trippers.go:463] GET https://172.18.214.123:8443/api/v1/nodes/multinode-517300-m02
	I0528 15:07:42.282533   12872 round_trippers.go:469] Request Headers:
	I0528 15:07:42.282560   12872 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:07:42.282560   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:07:42.289794   12872 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0528 15:07:42.291221   12872 round_trippers.go:577] Response Headers:
	I0528 15:07:42.291221   12872 round_trippers.go:580]     Content-Type: application/json
	I0528 15:07:42.291297   12872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:07:42.291297   12872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:07:42.291297   12872 round_trippers.go:580]     Date: Tue, 28 May 2024 22:07:42 GMT
	I0528 15:07:42.291297   12872 round_trippers.go:580]     Audit-Id: 72901cda-fa5d-4057-b0a9-91b6b3b27868
	I0528 15:07:42.291297   12872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:07:42.291297   12872 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300-m02","uid":"4ff35487-ffa9-4e96-b7b9-c6f33c06ac46","resourceVersion":"628","creationTimestamp":"2024-05-28T22:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_28T15_07_27_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:07:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3397 chars]
	I0528 15:07:42.792545   12872 round_trippers.go:463] GET https://172.18.214.123:8443/api/v1/nodes/multinode-517300-m02
	I0528 15:07:42.792545   12872 round_trippers.go:469] Request Headers:
	I0528 15:07:42.792545   12872 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:07:42.792545   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:07:42.793174   12872 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:07:42.797087   12872 round_trippers.go:577] Response Headers:
	I0528 15:07:42.797087   12872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:07:42.797087   12872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:07:42.797129   12872 round_trippers.go:580]     Date: Tue, 28 May 2024 22:07:42 GMT
	I0528 15:07:42.797129   12872 round_trippers.go:580]     Audit-Id: 8c7d81dd-1c75-45f9-b6fe-8e24cee77acb
	I0528 15:07:42.797129   12872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:07:42.797129   12872 round_trippers.go:580]     Content-Type: application/json
	I0528 15:07:42.797480   12872 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300-m02","uid":"4ff35487-ffa9-4e96-b7b9-c6f33c06ac46","resourceVersion":"628","creationTimestamp":"2024-05-28T22:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_28T15_07_27_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:07:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3397 chars]
	I0528 15:07:43.299111   12872 round_trippers.go:463] GET https://172.18.214.123:8443/api/v1/nodes/multinode-517300-m02
	I0528 15:07:43.299111   12872 round_trippers.go:469] Request Headers:
	I0528 15:07:43.299111   12872 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:07:43.299111   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:07:43.305255   12872 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0528 15:07:43.305323   12872 round_trippers.go:577] Response Headers:
	I0528 15:07:43.305323   12872 round_trippers.go:580]     Audit-Id: 854fbd6e-592c-489e-8b96-ec18cb8b1466
	I0528 15:07:43.305357   12872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:07:43.305357   12872 round_trippers.go:580]     Content-Type: application/json
	I0528 15:07:43.305357   12872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:07:43.305357   12872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:07:43.305392   12872 round_trippers.go:580]     Date: Tue, 28 May 2024 22:07:43 GMT
	I0528 15:07:43.305560   12872 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300-m02","uid":"4ff35487-ffa9-4e96-b7b9-c6f33c06ac46","resourceVersion":"628","creationTimestamp":"2024-05-28T22:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_28T15_07_27_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:07:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3397 chars]
	I0528 15:07:43.305560   12872 node_ready.go:53] node "multinode-517300-m02" has status "Ready":"False"
	I0528 15:07:43.786725   12872 round_trippers.go:463] GET https://172.18.214.123:8443/api/v1/nodes/multinode-517300-m02
	I0528 15:07:43.786793   12872 round_trippers.go:469] Request Headers:
	I0528 15:07:43.786822   12872 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:07:43.786822   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:07:43.787441   12872 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:07:43.790282   12872 round_trippers.go:577] Response Headers:
	I0528 15:07:43.790282   12872 round_trippers.go:580]     Date: Tue, 28 May 2024 22:07:43 GMT
	I0528 15:07:43.790282   12872 round_trippers.go:580]     Audit-Id: e36ddec9-df2b-4d28-b306-702bd89a539b
	I0528 15:07:43.790322   12872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:07:43.790322   12872 round_trippers.go:580]     Content-Type: application/json
	I0528 15:07:43.790322   12872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:07:43.790322   12872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:07:43.790607   12872 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300-m02","uid":"4ff35487-ffa9-4e96-b7b9-c6f33c06ac46","resourceVersion":"628","creationTimestamp":"2024-05-28T22:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_28T15_07_27_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:07:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3397 chars]
	I0528 15:07:44.282190   12872 round_trippers.go:463] GET https://172.18.214.123:8443/api/v1/nodes/multinode-517300-m02
	I0528 15:07:44.282190   12872 round_trippers.go:469] Request Headers:
	I0528 15:07:44.282190   12872 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:07:44.282190   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:07:44.286219   12872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 15:07:44.286219   12872 round_trippers.go:577] Response Headers:
	I0528 15:07:44.286219   12872 round_trippers.go:580]     Audit-Id: c0af6d15-8bf6-46d3-b311-dabe0ca88b7a
	I0528 15:07:44.286219   12872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:07:44.286219   12872 round_trippers.go:580]     Content-Type: application/json
	I0528 15:07:44.286219   12872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:07:44.286219   12872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:07:44.286219   12872 round_trippers.go:580]     Date: Tue, 28 May 2024 22:07:44 GMT
	I0528 15:07:44.286219   12872 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300-m02","uid":"4ff35487-ffa9-4e96-b7b9-c6f33c06ac46","resourceVersion":"628","creationTimestamp":"2024-05-28T22:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_28T15_07_27_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:07:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3397 chars]
	I0528 15:07:44.793102   12872 round_trippers.go:463] GET https://172.18.214.123:8443/api/v1/nodes/multinode-517300-m02
	I0528 15:07:44.793102   12872 round_trippers.go:469] Request Headers:
	I0528 15:07:44.793186   12872 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:07:44.793186   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:07:44.803976   12872 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0528 15:07:44.803976   12872 round_trippers.go:577] Response Headers:
	I0528 15:07:44.812871   12872 round_trippers.go:580]     Content-Type: application/json
	I0528 15:07:44.812871   12872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:07:44.812871   12872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:07:44.812871   12872 round_trippers.go:580]     Date: Tue, 28 May 2024 22:07:44 GMT
	I0528 15:07:44.812871   12872 round_trippers.go:580]     Audit-Id: 8ec1b4a8-2f9e-4099-b66a-22e777be8f0c
	I0528 15:07:44.812871   12872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:07:44.813249   12872 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300-m02","uid":"4ff35487-ffa9-4e96-b7b9-c6f33c06ac46","resourceVersion":"628","creationTimestamp":"2024-05-28T22:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_28T15_07_27_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:07:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3397 chars]
	I0528 15:07:45.282401   12872 round_trippers.go:463] GET https://172.18.214.123:8443/api/v1/nodes/multinode-517300-m02
	I0528 15:07:45.282458   12872 round_trippers.go:469] Request Headers:
	I0528 15:07:45.282458   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:07:45.282514   12872 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:07:45.288467   12872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0528 15:07:45.288467   12872 round_trippers.go:577] Response Headers:
	I0528 15:07:45.288467   12872 round_trippers.go:580]     Audit-Id: ad22568a-bb11-43b8-8228-e98eaaaa0898
	I0528 15:07:45.288467   12872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:07:45.288559   12872 round_trippers.go:580]     Content-Type: application/json
	I0528 15:07:45.288559   12872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:07:45.288559   12872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:07:45.288559   12872 round_trippers.go:580]     Date: Tue, 28 May 2024 22:07:45 GMT
	I0528 15:07:45.288758   12872 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300-m02","uid":"4ff35487-ffa9-4e96-b7b9-c6f33c06ac46","resourceVersion":"628","creationTimestamp":"2024-05-28T22:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_28T15_07_27_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:07:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3397 chars]
	I0528 15:07:45.785490   12872 round_trippers.go:463] GET https://172.18.214.123:8443/api/v1/nodes/multinode-517300-m02
	I0528 15:07:45.785490   12872 round_trippers.go:469] Request Headers:
	I0528 15:07:45.785490   12872 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:07:45.785490   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:07:45.786209   12872 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:07:45.789759   12872 round_trippers.go:577] Response Headers:
	I0528 15:07:45.789759   12872 round_trippers.go:580]     Audit-Id: 28188e1a-a17b-441b-b014-8c63b90d6fab
	I0528 15:07:45.789820   12872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:07:45.789820   12872 round_trippers.go:580]     Content-Type: application/json
	I0528 15:07:45.789820   12872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:07:45.789820   12872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:07:45.789820   12872 round_trippers.go:580]     Date: Tue, 28 May 2024 22:07:45 GMT
	I0528 15:07:45.789820   12872 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300-m02","uid":"4ff35487-ffa9-4e96-b7b9-c6f33c06ac46","resourceVersion":"628","creationTimestamp":"2024-05-28T22:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_28T15_07_27_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:07:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3397 chars]
	I0528 15:07:45.790549   12872 node_ready.go:53] node "multinode-517300-m02" has status "Ready":"False"
	I0528 15:07:46.286152   12872 round_trippers.go:463] GET https://172.18.214.123:8443/api/v1/nodes/multinode-517300-m02
	I0528 15:07:46.286152   12872 round_trippers.go:469] Request Headers:
	I0528 15:07:46.286152   12872 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:07:46.286152   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:07:46.286709   12872 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:07:46.286709   12872 round_trippers.go:577] Response Headers:
	I0528 15:07:46.286709   12872 round_trippers.go:580]     Audit-Id: 5d1c5fce-0ec6-424b-871d-63c282fa3d32
	I0528 15:07:46.286709   12872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:07:46.286709   12872 round_trippers.go:580]     Content-Type: application/json
	I0528 15:07:46.286709   12872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:07:46.286709   12872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:07:46.290197   12872 round_trippers.go:580]     Date: Tue, 28 May 2024 22:07:46 GMT
	I0528 15:07:46.290430   12872 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300-m02","uid":"4ff35487-ffa9-4e96-b7b9-c6f33c06ac46","resourceVersion":"628","creationTimestamp":"2024-05-28T22:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_28T15_07_27_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:07:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3397 chars]
	I0528 15:07:46.810680   12872 round_trippers.go:463] GET https://172.18.214.123:8443/api/v1/nodes/multinode-517300-m02
	I0528 15:07:46.810747   12872 round_trippers.go:469] Request Headers:
	I0528 15:07:46.810747   12872 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:07:46.810747   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:07:46.816916   12872 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0528 15:07:46.816916   12872 round_trippers.go:577] Response Headers:
	I0528 15:07:46.816916   12872 round_trippers.go:580]     Content-Type: application/json
	I0528 15:07:46.816916   12872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:07:46.816916   12872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:07:46.816916   12872 round_trippers.go:580]     Date: Tue, 28 May 2024 22:07:46 GMT
	I0528 15:07:46.816916   12872 round_trippers.go:580]     Audit-Id: 3782a615-395f-492d-b572-14ee0d83d970
	I0528 15:07:46.816916   12872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:07:46.817757   12872 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300-m02","uid":"4ff35487-ffa9-4e96-b7b9-c6f33c06ac46","resourceVersion":"628","creationTimestamp":"2024-05-28T22:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_28T15_07_27_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:07:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3397 chars]
	I0528 15:07:47.284420   12872 round_trippers.go:463] GET https://172.18.214.123:8443/api/v1/nodes/multinode-517300-m02
	I0528 15:07:47.284526   12872 round_trippers.go:469] Request Headers:
	I0528 15:07:47.284526   12872 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:07:47.284526   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:07:47.284860   12872 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:07:47.284860   12872 round_trippers.go:577] Response Headers:
	I0528 15:07:47.284860   12872 round_trippers.go:580]     Audit-Id: 51bc0757-df2d-4aa7-9d3d-ac190a20ee0a
	I0528 15:07:47.284860   12872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:07:47.284860   12872 round_trippers.go:580]     Content-Type: application/json
	I0528 15:07:47.284860   12872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:07:47.284860   12872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:07:47.284860   12872 round_trippers.go:580]     Date: Tue, 28 May 2024 22:07:47 GMT
	I0528 15:07:47.287962   12872 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300-m02","uid":"4ff35487-ffa9-4e96-b7b9-c6f33c06ac46","resourceVersion":"628","creationTimestamp":"2024-05-28T22:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_28T15_07_27_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:07:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3397 chars]
	I0528 15:07:47.792356   12872 round_trippers.go:463] GET https://172.18.214.123:8443/api/v1/nodes/multinode-517300-m02
	I0528 15:07:47.792414   12872 round_trippers.go:469] Request Headers:
	I0528 15:07:47.792414   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:07:47.792414   12872 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:07:47.796704   12872 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:07:47.796767   12872 round_trippers.go:577] Response Headers:
	I0528 15:07:47.796767   12872 round_trippers.go:580]     Audit-Id: f981de5e-01eb-48a8-959e-e00d0367ea39
	I0528 15:07:47.796767   12872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:07:47.796767   12872 round_trippers.go:580]     Content-Type: application/json
	I0528 15:07:47.796767   12872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:07:47.796860   12872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:07:47.796860   12872 round_trippers.go:580]     Date: Tue, 28 May 2024 22:07:47 GMT
	I0528 15:07:47.796994   12872 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300-m02","uid":"4ff35487-ffa9-4e96-b7b9-c6f33c06ac46","resourceVersion":"628","creationTimestamp":"2024-05-28T22:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_28T15_07_27_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:07:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3397 chars]
	I0528 15:07:47.797637   12872 node_ready.go:53] node "multinode-517300-m02" has status "Ready":"False"
	I0528 15:07:48.285434   12872 round_trippers.go:463] GET https://172.18.214.123:8443/api/v1/nodes/multinode-517300-m02
	I0528 15:07:48.285740   12872 round_trippers.go:469] Request Headers:
	I0528 15:07:48.285826   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:07:48.285826   12872 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:07:48.286564   12872 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:07:48.286564   12872 round_trippers.go:577] Response Headers:
	I0528 15:07:48.289919   12872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:07:48.289919   12872 round_trippers.go:580]     Content-Type: application/json
	I0528 15:07:48.289919   12872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:07:48.289919   12872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:07:48.289919   12872 round_trippers.go:580]     Date: Tue, 28 May 2024 22:07:48 GMT
	I0528 15:07:48.289919   12872 round_trippers.go:580]     Audit-Id: e1151ffe-bb83-4907-aa5d-47612604c856
	I0528 15:07:48.290160   12872 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300-m02","uid":"4ff35487-ffa9-4e96-b7b9-c6f33c06ac46","resourceVersion":"628","creationTimestamp":"2024-05-28T22:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_28T15_07_27_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:07:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3397 chars]
	I0528 15:07:48.782252   12872 round_trippers.go:463] GET https://172.18.214.123:8443/api/v1/nodes/multinode-517300-m02
	I0528 15:07:48.782341   12872 round_trippers.go:469] Request Headers:
	I0528 15:07:48.782539   12872 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:07:48.782539   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:07:48.783324   12872 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:07:48.783324   12872 round_trippers.go:577] Response Headers:
	I0528 15:07:48.786419   12872 round_trippers.go:580]     Audit-Id: 7c21b318-a9ef-4792-adb3-28abaa63bd3b
	I0528 15:07:48.786419   12872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:07:48.786419   12872 round_trippers.go:580]     Content-Type: application/json
	I0528 15:07:48.786419   12872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:07:48.786419   12872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:07:48.786419   12872 round_trippers.go:580]     Date: Tue, 28 May 2024 22:07:48 GMT
	I0528 15:07:48.786745   12872 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300-m02","uid":"4ff35487-ffa9-4e96-b7b9-c6f33c06ac46","resourceVersion":"628","creationTimestamp":"2024-05-28T22:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_28T15_07_27_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:07:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3397 chars]
	I0528 15:07:49.287710   12872 round_trippers.go:463] GET https://172.18.214.123:8443/api/v1/nodes/multinode-517300-m02
	I0528 15:07:49.287710   12872 round_trippers.go:469] Request Headers:
	I0528 15:07:49.287710   12872 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:07:49.287710   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:07:49.298121   12872 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0528 15:07:49.298694   12872 round_trippers.go:577] Response Headers:
	I0528 15:07:49.298694   12872 round_trippers.go:580]     Audit-Id: a094096f-29ac-41c7-952e-5434f6926858
	I0528 15:07:49.298694   12872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:07:49.298754   12872 round_trippers.go:580]     Content-Type: application/json
	I0528 15:07:49.298754   12872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:07:49.298754   12872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:07:49.298754   12872 round_trippers.go:580]     Date: Tue, 28 May 2024 22:07:49 GMT
	I0528 15:07:49.298916   12872 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300-m02","uid":"4ff35487-ffa9-4e96-b7b9-c6f33c06ac46","resourceVersion":"628","creationTimestamp":"2024-05-28T22:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_28T15_07_27_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:07:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3397 chars]
	I0528 15:07:49.792476   12872 round_trippers.go:463] GET https://172.18.214.123:8443/api/v1/nodes/multinode-517300-m02
	I0528 15:07:49.792546   12872 round_trippers.go:469] Request Headers:
	I0528 15:07:49.792546   12872 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:07:49.792546   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:07:49.793031   12872 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:07:49.793031   12872 round_trippers.go:577] Response Headers:
	I0528 15:07:49.793031   12872 round_trippers.go:580]     Content-Type: application/json
	I0528 15:07:49.796945   12872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:07:49.796945   12872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:07:49.796945   12872 round_trippers.go:580]     Date: Tue, 28 May 2024 22:07:49 GMT
	I0528 15:07:49.796945   12872 round_trippers.go:580]     Audit-Id: 85bc801a-7e5a-482d-872c-67c36977ac69
	I0528 15:07:49.796945   12872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:07:49.797227   12872 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300-m02","uid":"4ff35487-ffa9-4e96-b7b9-c6f33c06ac46","resourceVersion":"628","creationTimestamp":"2024-05-28T22:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_28T15_07_27_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:07:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3397 chars]
	I0528 15:07:49.797699   12872 node_ready.go:53] node "multinode-517300-m02" has status "Ready":"False"
	I0528 15:07:50.282566   12872 round_trippers.go:463] GET https://172.18.214.123:8443/api/v1/nodes/multinode-517300-m02
	I0528 15:07:50.282566   12872 round_trippers.go:469] Request Headers:
	I0528 15:07:50.282566   12872 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:07:50.282839   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:07:50.283373   12872 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:07:50.283373   12872 round_trippers.go:577] Response Headers:
	I0528 15:07:50.283373   12872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:07:50.283373   12872 round_trippers.go:580]     Date: Tue, 28 May 2024 22:07:50 GMT
	I0528 15:07:50.283373   12872 round_trippers.go:580]     Audit-Id: 89272336-0578-42f8-811f-23f3d1ef5d6e
	I0528 15:07:50.283373   12872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:07:50.283373   12872 round_trippers.go:580]     Content-Type: application/json
	I0528 15:07:50.283373   12872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:07:50.286977   12872 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300-m02","uid":"4ff35487-ffa9-4e96-b7b9-c6f33c06ac46","resourceVersion":"628","creationTimestamp":"2024-05-28T22:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_28T15_07_27_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:07:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3397 chars]
	I0528 15:07:50.792672   12872 round_trippers.go:463] GET https://172.18.214.123:8443/api/v1/nodes/multinode-517300-m02
	I0528 15:07:50.792909   12872 round_trippers.go:469] Request Headers:
	I0528 15:07:50.792909   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:07:50.792909   12872 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:07:50.793468   12872 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:07:50.797688   12872 round_trippers.go:577] Response Headers:
	I0528 15:07:50.797688   12872 round_trippers.go:580]     Audit-Id: cdf37ffa-4a56-4e3f-bfa2-0753590affee
	I0528 15:07:50.797688   12872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:07:50.797688   12872 round_trippers.go:580]     Content-Type: application/json
	I0528 15:07:50.797688   12872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:07:50.797688   12872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:07:50.797688   12872 round_trippers.go:580]     Date: Tue, 28 May 2024 22:07:50 GMT
	I0528 15:07:50.798074   12872 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300-m02","uid":"4ff35487-ffa9-4e96-b7b9-c6f33c06ac46","resourceVersion":"628","creationTimestamp":"2024-05-28T22:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_28T15_07_27_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:07:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3397 chars]
	I0528 15:07:51.294779   12872 round_trippers.go:463] GET https://172.18.214.123:8443/api/v1/nodes/multinode-517300-m02
	I0528 15:07:51.294893   12872 round_trippers.go:469] Request Headers:
	I0528 15:07:51.294964   12872 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:07:51.294964   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:07:51.295268   12872 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:07:51.298029   12872 round_trippers.go:577] Response Headers:
	I0528 15:07:51.298106   12872 round_trippers.go:580]     Content-Type: application/json
	I0528 15:07:51.298106   12872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:07:51.298106   12872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:07:51.298106   12872 round_trippers.go:580]     Date: Tue, 28 May 2024 22:07:51 GMT
	I0528 15:07:51.298106   12872 round_trippers.go:580]     Audit-Id: 73554efc-515f-4de4-bf38-bc0f4a0a391e
	I0528 15:07:51.298106   12872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:07:51.298106   12872 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300-m02","uid":"4ff35487-ffa9-4e96-b7b9-c6f33c06ac46","resourceVersion":"656","creationTimestamp":"2024-05-28T22:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_28T15_07_27_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:07:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3143 chars]
	I0528 15:07:51.298827   12872 node_ready.go:49] node "multinode-517300-m02" has status "Ready":"True"
	I0528 15:07:51.298827   12872 node_ready.go:38] duration metric: took 23.0173944s for node "multinode-517300-m02" to be "Ready" ...
	I0528 15:07:51.298827   12872 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
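	[editor's note] The node_ready.go entries above show the pattern behind this stretch of log: the client re-GETs the node roughly every 0.5s until its Ready condition reports True, then moves on to waiting for system pods. The following is a minimal illustrative sketch of that polling loop using client-go, not minikube's actual code; the function name waitNodeReady, the timeout, and the kubeconfig path are assumptions for the example.

	// Sketch only: poll a node's Ready condition the way the log above suggests.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitNodeReady (hypothetical helper) re-fetches the node until its
	// NodeReady condition is True or the timeout elapses.
	func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err == nil {
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						return nil // node reported Ready
					}
				}
			}
			time.Sleep(500 * time.Millisecond) // roughly the same cadence as the GETs logged above
		}
		return fmt.Errorf("node %q not Ready within %v", name, timeout)
	}

	func main() {
		// Assumes a kubeconfig at the default location; minikube builds its
		// client differently.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		if err := waitNodeReady(context.Background(), cs, "multinode-517300-m02", 6*time.Minute); err != nil {
			panic(err)
		}
	}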
	I0528 15:07:51.298827   12872 round_trippers.go:463] GET https://172.18.214.123:8443/api/v1/namespaces/kube-system/pods
	I0528 15:07:51.298827   12872 round_trippers.go:469] Request Headers:
	I0528 15:07:51.298827   12872 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:07:51.298827   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:07:51.299518   12872 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:07:51.299518   12872 round_trippers.go:577] Response Headers:
	I0528 15:07:51.303195   12872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:07:51.303195   12872 round_trippers.go:580]     Date: Tue, 28 May 2024 22:07:51 GMT
	I0528 15:07:51.303195   12872 round_trippers.go:580]     Audit-Id: 122c2efd-2dd3-4f71-923a-8531c6f26720
	I0528 15:07:51.303195   12872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:07:51.303195   12872 round_trippers.go:580]     Content-Type: application/json
	I0528 15:07:51.303195   12872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:07:51.304310   12872 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"656"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-p9vb9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d1d5d377-99a6-445e-9597-446b3057e80f","resourceVersion":"447","creationTimestamp":"2024-05-28T22:04:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d11a1b0d-8687-41d7-b841-24979449141a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:04:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d11a1b0d-8687-41d7-b841-24979449141a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 70476 chars]
	I0528 15:07:51.307601   12872 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-p9vb9" in "kube-system" namespace to be "Ready" ...
	I0528 15:07:51.307673   12872 round_trippers.go:463] GET https://172.18.214.123:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-p9vb9
	I0528 15:07:51.307782   12872 round_trippers.go:469] Request Headers:
	I0528 15:07:51.307782   12872 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:07:51.307782   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:07:51.309541   12872 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0528 15:07:51.309541   12872 round_trippers.go:577] Response Headers:
	I0528 15:07:51.309541   12872 round_trippers.go:580]     Audit-Id: 1088dbb6-60cb-4c89-be4d-7663c5849bb7
	I0528 15:07:51.309541   12872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:07:51.309541   12872 round_trippers.go:580]     Content-Type: application/json
	I0528 15:07:51.309541   12872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:07:51.309541   12872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:07:51.309541   12872 round_trippers.go:580]     Date: Tue, 28 May 2024 22:07:51 GMT
	I0528 15:07:51.311793   12872 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-p9vb9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d1d5d377-99a6-445e-9597-446b3057e80f","resourceVersion":"447","creationTimestamp":"2024-05-28T22:04:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d11a1b0d-8687-41d7-b841-24979449141a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:04:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d11a1b0d-8687-41d7-b841-24979449141a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6580 chars]
	I0528 15:07:51.311849   12872 round_trippers.go:463] GET https://172.18.214.123:8443/api/v1/nodes/multinode-517300
	I0528 15:07:51.311849   12872 round_trippers.go:469] Request Headers:
	I0528 15:07:51.311849   12872 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:07:51.311849   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:07:51.315464   12872 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 15:07:51.315831   12872 round_trippers.go:577] Response Headers:
	I0528 15:07:51.315831   12872 round_trippers.go:580]     Audit-Id: ce486499-b565-4035-8086-301f5f3d7422
	I0528 15:07:51.315831   12872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:07:51.315831   12872 round_trippers.go:580]     Content-Type: application/json
	I0528 15:07:51.315831   12872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:07:51.315831   12872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:07:51.315831   12872 round_trippers.go:580]     Date: Tue, 28 May 2024 22:07:51 GMT
	I0528 15:07:51.315831   12872 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"457","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","fi [truncated 4960 chars]
	I0528 15:07:51.316692   12872 pod_ready.go:92] pod "coredns-7db6d8ff4d-p9vb9" in "kube-system" namespace has status "Ready":"True"
	I0528 15:07:51.316692   12872 pod_ready.go:81] duration metric: took 9.0906ms for pod "coredns-7db6d8ff4d-p9vb9" in "kube-system" namespace to be "Ready" ...
	I0528 15:07:51.316692   12872 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-517300" in "kube-system" namespace to be "Ready" ...
	I0528 15:07:51.316692   12872 round_trippers.go:463] GET https://172.18.214.123:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-517300
	I0528 15:07:51.316692   12872 round_trippers.go:469] Request Headers:
	I0528 15:07:51.316692   12872 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:07:51.316692   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:07:51.318583   12872 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0528 15:07:51.318583   12872 round_trippers.go:577] Response Headers:
	I0528 15:07:51.318583   12872 round_trippers.go:580]     Audit-Id: 47428d77-b5ec-4de7-b32a-d8c78f0855f4
	I0528 15:07:51.320682   12872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:07:51.320682   12872 round_trippers.go:580]     Content-Type: application/json
	I0528 15:07:51.320682   12872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:07:51.320682   12872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:07:51.320682   12872 round_trippers.go:580]     Date: Tue, 28 May 2024 22:07:51 GMT
	I0528 15:07:51.320998   12872 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-517300","namespace":"kube-system","uid":"8849c8b3-7ff5-42a1-856e-ce9d64467dea","resourceVersion":"323","creationTimestamp":"2024-05-28T22:04:18Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.18.214.123:2379","kubernetes.io/config.hash":"566cca5e4c6424d0dbd4fbafe29461a9","kubernetes.io/config.mirror":"566cca5e4c6424d0dbd4fbafe29461a9","kubernetes.io/config.seen":"2024-05-28T22:04:18.078849689Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:04:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6170 chars]
	I0528 15:07:51.321534   12872 round_trippers.go:463] GET https://172.18.214.123:8443/api/v1/nodes/multinode-517300
	I0528 15:07:51.321745   12872 round_trippers.go:469] Request Headers:
	I0528 15:07:51.321864   12872 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:07:51.321940   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:07:51.322672   12872 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:07:51.322672   12872 round_trippers.go:577] Response Headers:
	I0528 15:07:51.322672   12872 round_trippers.go:580]     Content-Type: application/json
	I0528 15:07:51.322672   12872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:07:51.322672   12872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:07:51.322672   12872 round_trippers.go:580]     Date: Tue, 28 May 2024 22:07:51 GMT
	I0528 15:07:51.322672   12872 round_trippers.go:580]     Audit-Id: 36f8be1b-22f5-4a6e-ba41-05a20cc4e1c3
	I0528 15:07:51.325120   12872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:07:51.325352   12872 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"457","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","fi [truncated 4960 chars]
	I0528 15:07:51.325352   12872 pod_ready.go:92] pod "etcd-multinode-517300" in "kube-system" namespace has status "Ready":"True"
	I0528 15:07:51.325352   12872 pod_ready.go:81] duration metric: took 8.6604ms for pod "etcd-multinode-517300" in "kube-system" namespace to be "Ready" ...
	I0528 15:07:51.325352   12872 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-517300" in "kube-system" namespace to be "Ready" ...
	I0528 15:07:51.325352   12872 round_trippers.go:463] GET https://172.18.214.123:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-517300
	I0528 15:07:51.325957   12872 round_trippers.go:469] Request Headers:
	I0528 15:07:51.325957   12872 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:07:51.325957   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:07:51.326384   12872 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:07:51.329142   12872 round_trippers.go:577] Response Headers:
	I0528 15:07:51.329142   12872 round_trippers.go:580]     Audit-Id: 570b2c1e-4030-43bc-be7b-6553378ce510
	I0528 15:07:51.329142   12872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:07:51.329142   12872 round_trippers.go:580]     Content-Type: application/json
	I0528 15:07:51.329142   12872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:07:51.329142   12872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:07:51.329142   12872 round_trippers.go:580]     Date: Tue, 28 May 2024 22:07:51 GMT
	I0528 15:07:51.329142   12872 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-517300","namespace":"kube-system","uid":"1b776372-ea40-408c-99bb-a0d05e620038","resourceVersion":"326","creationTimestamp":"2024-05-28T22:04:18Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.18.214.123:8443","kubernetes.io/config.hash":"52f10060875936c1839f4d75318e7457","kubernetes.io/config.mirror":"52f10060875936c1839f4d75318e7457","kubernetes.io/config.seen":"2024-05-28T22:04:18.078855589Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:04:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7704 chars]
	I0528 15:07:51.329943   12872 round_trippers.go:463] GET https://172.18.214.123:8443/api/v1/nodes/multinode-517300
	I0528 15:07:51.329943   12872 round_trippers.go:469] Request Headers:
	I0528 15:07:51.329943   12872 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:07:51.329943   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:07:51.335993   12872 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0528 15:07:51.335993   12872 round_trippers.go:577] Response Headers:
	I0528 15:07:51.335993   12872 round_trippers.go:580]     Audit-Id: 09b8d19c-ca6f-44a5-b302-f441ff72fd17
	I0528 15:07:51.335993   12872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:07:51.335993   12872 round_trippers.go:580]     Content-Type: application/json
	I0528 15:07:51.335993   12872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:07:51.335993   12872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:07:51.335993   12872 round_trippers.go:580]     Date: Tue, 28 May 2024 22:07:51 GMT
	I0528 15:07:51.336763   12872 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"457","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","fi [truncated 4960 chars]
	I0528 15:07:51.337551   12872 pod_ready.go:92] pod "kube-apiserver-multinode-517300" in "kube-system" namespace has status "Ready":"True"
	I0528 15:07:51.337551   12872 pod_ready.go:81] duration metric: took 12.1982ms for pod "kube-apiserver-multinode-517300" in "kube-system" namespace to be "Ready" ...
	I0528 15:07:51.337551   12872 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-517300" in "kube-system" namespace to be "Ready" ...
	I0528 15:07:51.337551   12872 round_trippers.go:463] GET https://172.18.214.123:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-517300
	I0528 15:07:51.337551   12872 round_trippers.go:469] Request Headers:
	I0528 15:07:51.337551   12872 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:07:51.337551   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:07:51.340543   12872 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 15:07:51.340543   12872 round_trippers.go:577] Response Headers:
	I0528 15:07:51.340543   12872 round_trippers.go:580]     Audit-Id: f452c177-4882-415d-94b7-4545a3e5061d
	I0528 15:07:51.340543   12872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:07:51.340543   12872 round_trippers.go:580]     Content-Type: application/json
	I0528 15:07:51.340543   12872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:07:51.340543   12872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:07:51.340543   12872 round_trippers.go:580]     Date: Tue, 28 May 2024 22:07:51 GMT
	I0528 15:07:51.340543   12872 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-517300","namespace":"kube-system","uid":"8de5dc69-a626-438c-8f8e-1542254825fb","resourceVersion":"320","creationTimestamp":"2024-05-28T22:04:18Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"26672dc0ef1c19cfbeb08736c0b97223","kubernetes.io/config.mirror":"26672dc0ef1c19cfbeb08736c0b97223","kubernetes.io/config.seen":"2024-05-28T22:04:18.078856789Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:04:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7269 chars]
	I0528 15:07:51.341257   12872 round_trippers.go:463] GET https://172.18.214.123:8443/api/v1/nodes/multinode-517300
	I0528 15:07:51.341257   12872 round_trippers.go:469] Request Headers:
	I0528 15:07:51.341257   12872 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:07:51.341257   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:07:51.344272   12872 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 15:07:51.344272   12872 round_trippers.go:577] Response Headers:
	I0528 15:07:51.344272   12872 round_trippers.go:580]     Date: Tue, 28 May 2024 22:07:51 GMT
	I0528 15:07:51.344272   12872 round_trippers.go:580]     Audit-Id: 891be991-d585-4387-93d6-8496be146998
	I0528 15:07:51.344272   12872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:07:51.344272   12872 round_trippers.go:580]     Content-Type: application/json
	I0528 15:07:51.344272   12872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:07:51.344272   12872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:07:51.344272   12872 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"457","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","fi [truncated 4960 chars]
	I0528 15:07:51.344837   12872 pod_ready.go:92] pod "kube-controller-manager-multinode-517300" in "kube-system" namespace has status "Ready":"True"
	I0528 15:07:51.344916   12872 pod_ready.go:81] duration metric: took 7.3653ms for pod "kube-controller-manager-multinode-517300" in "kube-system" namespace to be "Ready" ...
	I0528 15:07:51.344916   12872 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-ckbkv" in "kube-system" namespace to be "Ready" ...
	I0528 15:07:51.500427   12872 request.go:629] Waited for 155.3009ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.214.123:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ckbkv
	I0528 15:07:51.500624   12872 round_trippers.go:463] GET https://172.18.214.123:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ckbkv
	I0528 15:07:51.500624   12872 round_trippers.go:469] Request Headers:
	I0528 15:07:51.500705   12872 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:07:51.500705   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:07:51.501009   12872 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:07:51.501009   12872 round_trippers.go:577] Response Headers:
	I0528 15:07:51.501009   12872 round_trippers.go:580]     Date: Tue, 28 May 2024 22:07:51 GMT
	I0528 15:07:51.501009   12872 round_trippers.go:580]     Audit-Id: b9b9fffc-8ee0-4db3-832c-116cd8b062e1
	I0528 15:07:51.501009   12872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:07:51.501009   12872 round_trippers.go:580]     Content-Type: application/json
	I0528 15:07:51.501009   12872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:07:51.501009   12872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:07:51.505892   12872 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-ckbkv","generateName":"kube-proxy-","namespace":"kube-system","uid":"13ccca53-976a-4839-be94-d3960388ebfa","resourceVersion":"637","creationTimestamp":"2024-05-28T22:07:27Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"1b969d95-d2e7-4f1f-9dfe-2e267564d504","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:07:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1b969d95-d2e7-4f1f-9dfe-2e267564d504\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5836 chars]
	I0528 15:07:51.704893   12872 request.go:629] Waited for 198.1692ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.214.123:8443/api/v1/nodes/multinode-517300-m02
	I0528 15:07:51.705212   12872 round_trippers.go:463] GET https://172.18.214.123:8443/api/v1/nodes/multinode-517300-m02
	I0528 15:07:51.705273   12872 round_trippers.go:469] Request Headers:
	I0528 15:07:51.705331   12872 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:07:51.705331   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:07:51.706198   12872 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:07:51.710419   12872 round_trippers.go:577] Response Headers:
	I0528 15:07:51.710419   12872 round_trippers.go:580]     Audit-Id: 70c853ac-9f8b-47a6-9730-2fd0fce61f72
	I0528 15:07:51.710658   12872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:07:51.710658   12872 round_trippers.go:580]     Content-Type: application/json
	I0528 15:07:51.710658   12872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:07:51.710658   12872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:07:51.710658   12872 round_trippers.go:580]     Date: Tue, 28 May 2024 22:07:51 GMT
	I0528 15:07:51.710658   12872 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300-m02","uid":"4ff35487-ffa9-4e96-b7b9-c6f33c06ac46","resourceVersion":"656","creationTimestamp":"2024-05-28T22:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_28T15_07_27_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:07:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3143 chars]
	I0528 15:07:51.711311   12872 pod_ready.go:92] pod "kube-proxy-ckbkv" in "kube-system" namespace has status "Ready":"True"
	I0528 15:07:51.711311   12872 pod_ready.go:81] duration metric: took 366.3937ms for pod "kube-proxy-ckbkv" in "kube-system" namespace to be "Ready" ...
	I0528 15:07:51.711311   12872 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hbbq2" in "kube-system" namespace to be "Ready" ...
	I0528 15:07:51.908189   12872 request.go:629] Waited for 196.5523ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.214.123:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hbbq2
	I0528 15:07:51.908411   12872 round_trippers.go:463] GET https://172.18.214.123:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hbbq2
	I0528 15:07:51.908411   12872 round_trippers.go:469] Request Headers:
	I0528 15:07:51.908411   12872 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:07:51.908411   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:07:51.909068   12872 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:07:51.912844   12872 round_trippers.go:577] Response Headers:
	I0528 15:07:51.912844   12872 round_trippers.go:580]     Audit-Id: ae19a8a6-116b-4e5d-b0c8-bc90dcaf32a7
	I0528 15:07:51.912927   12872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:07:51.912927   12872 round_trippers.go:580]     Content-Type: application/json
	I0528 15:07:51.912927   12872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:07:51.912927   12872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:07:51.912927   12872 round_trippers.go:580]     Date: Tue, 28 May 2024 22:07:51 GMT
	I0528 15:07:51.912927   12872 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-hbbq2","generateName":"kube-proxy-","namespace":"kube-system","uid":"3d8065a2-e1bb-4cf6-837a-f63ed0718835","resourceVersion":"408","creationTimestamp":"2024-05-28T22:04:31Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"1b969d95-d2e7-4f1f-9dfe-2e267564d504","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:04:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1b969d95-d2e7-4f1f-9dfe-2e267564d504\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5833 chars]
	I0528 15:07:52.103079   12872 request.go:629] Waited for 188.8193ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.214.123:8443/api/v1/nodes/multinode-517300
	I0528 15:07:52.103351   12872 round_trippers.go:463] GET https://172.18.214.123:8443/api/v1/nodes/multinode-517300
	I0528 15:07:52.103351   12872 round_trippers.go:469] Request Headers:
	I0528 15:07:52.103351   12872 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:07:52.103351   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:07:52.106700   12872 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:07:52.106774   12872 round_trippers.go:577] Response Headers:
	I0528 15:07:52.106774   12872 round_trippers.go:580]     Audit-Id: 37e11648-10a3-4425-af15-087232c4d915
	I0528 15:07:52.106805   12872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:07:52.106805   12872 round_trippers.go:580]     Content-Type: application/json
	I0528 15:07:52.106805   12872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:07:52.106805   12872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:07:52.106805   12872 round_trippers.go:580]     Date: Tue, 28 May 2024 22:07:52 GMT
	I0528 15:07:52.106805   12872 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"457","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","fi [truncated 4960 chars]
	I0528 15:07:52.107635   12872 pod_ready.go:92] pod "kube-proxy-hbbq2" in "kube-system" namespace has status "Ready":"True"
	I0528 15:07:52.107635   12872 pod_ready.go:81] duration metric: took 396.3229ms for pod "kube-proxy-hbbq2" in "kube-system" namespace to be "Ready" ...
	I0528 15:07:52.107682   12872 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-517300" in "kube-system" namespace to be "Ready" ...
	I0528 15:07:52.298348   12872 request.go:629] Waited for 190.4454ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.214.123:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-517300
	I0528 15:07:52.298348   12872 round_trippers.go:463] GET https://172.18.214.123:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-517300
	I0528 15:07:52.298348   12872 round_trippers.go:469] Request Headers:
	I0528 15:07:52.298348   12872 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:07:52.298348   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:07:52.298954   12872 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:07:52.307640   12872 round_trippers.go:577] Response Headers:
	I0528 15:07:52.307640   12872 round_trippers.go:580]     Audit-Id: e03290ee-74d8-4fc1-a634-1a55c34139b6
	I0528 15:07:52.307640   12872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:07:52.307640   12872 round_trippers.go:580]     Content-Type: application/json
	I0528 15:07:52.307640   12872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:07:52.307947   12872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:07:52.308011   12872 round_trippers.go:580]     Date: Tue, 28 May 2024 22:07:52 GMT
	I0528 15:07:52.308212   12872 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-517300","namespace":"kube-system","uid":"2dc1e4d9-ef1b-4f83-8762-569efcb3f4c6","resourceVersion":"318","creationTimestamp":"2024-05-28T22:04:18Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"84e130f816e1eba82686beb351d9b34a","kubernetes.io/config.mirror":"84e130f816e1eba82686beb351d9b34a","kubernetes.io/config.seen":"2024-05-28T22:04:18.078857889Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:04:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4999 chars]
	I0528 15:07:52.507696   12872 request.go:629] Waited for 198.3546ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.214.123:8443/api/v1/nodes/multinode-517300
	I0528 15:07:52.507874   12872 round_trippers.go:463] GET https://172.18.214.123:8443/api/v1/nodes/multinode-517300
	I0528 15:07:52.507874   12872 round_trippers.go:469] Request Headers:
	I0528 15:07:52.507874   12872 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:07:52.507874   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:07:52.512487   12872 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:07:52.512624   12872 round_trippers.go:577] Response Headers:
	I0528 15:07:52.512688   12872 round_trippers.go:580]     Content-Type: application/json
	I0528 15:07:52.512688   12872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:07:52.512688   12872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:07:52.512688   12872 round_trippers.go:580]     Date: Tue, 28 May 2024 22:07:52 GMT
	I0528 15:07:52.512688   12872 round_trippers.go:580]     Audit-Id: 6d6bd048-1161-457c-9cdd-aa3604a0f02b
	I0528 15:07:52.512688   12872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:07:52.512688   12872 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"457","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","fi [truncated 4960 chars]
	I0528 15:07:52.513407   12872 pod_ready.go:92] pod "kube-scheduler-multinode-517300" in "kube-system" namespace has status "Ready":"True"
	I0528 15:07:52.513951   12872 pod_ready.go:81] duration metric: took 406.2679ms for pod "kube-scheduler-multinode-517300" in "kube-system" namespace to be "Ready" ...
	I0528 15:07:52.513951   12872 pod_ready.go:38] duration metric: took 1.2151204s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0528 15:07:52.514199   12872 system_svc.go:44] waiting for kubelet service to be running ....
	I0528 15:07:52.529405   12872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0528 15:07:52.553516   12872 system_svc.go:56] duration metric: took 39.3465ms WaitForService to wait for kubelet
	I0528 15:07:52.553516   12872 kubeadm.go:576] duration metric: took 24.504467s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0528 15:07:52.553516   12872 node_conditions.go:102] verifying NodePressure condition ...
	I0528 15:07:52.695160   12872 request.go:629] Waited for 141.6437ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.214.123:8443/api/v1/nodes
	I0528 15:07:52.695472   12872 round_trippers.go:463] GET https://172.18.214.123:8443/api/v1/nodes
	I0528 15:07:52.695472   12872 round_trippers.go:469] Request Headers:
	I0528 15:07:52.695472   12872 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:07:52.695472   12872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:07:52.701809   12872 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0528 15:07:52.701899   12872 round_trippers.go:577] Response Headers:
	I0528 15:07:52.701899   12872 round_trippers.go:580]     Content-Type: application/json
	I0528 15:07:52.701899   12872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:07:52.701899   12872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:07:52.701942   12872 round_trippers.go:580]     Date: Tue, 28 May 2024 22:07:52 GMT
	I0528 15:07:52.701942   12872 round_trippers.go:580]     Audit-Id: da18a21c-b3fe-4e3c-a8c0-980eabaeb76a
	I0528 15:07:52.701942   12872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:07:52.703020   12872 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"658"},"items":[{"metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"457","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 9148 chars]
	I0528 15:07:52.703708   12872 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0528 15:07:52.703743   12872 node_conditions.go:123] node cpu capacity is 2
	I0528 15:07:52.703743   12872 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0528 15:07:52.703743   12872 node_conditions.go:123] node cpu capacity is 2
	I0528 15:07:52.703743   12872 node_conditions.go:105] duration metric: took 150.2267ms to run NodePressure ...
	I0528 15:07:52.703743   12872 start.go:240] waiting for startup goroutines ...
	I0528 15:07:52.703743   12872 start.go:254] writing updated cluster config ...
	I0528 15:07:52.717606   12872 ssh_runner.go:195] Run: rm -f paused
	I0528 15:07:52.866609   12872 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0528 15:07:52.870533   12872 out.go:177] * Done! kubectl is now configured to use "multinode-517300" cluster and "default" namespace by default
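	The trace above is minikube's readiness gate: it polls each control-plane and kube-system pod through the API server (backing off on client-side throttling between GETs), then inspects node allocatable capacity before declaring the cluster ready. Below is a minimal client-go sketch of an equivalent check, not minikube's own code; it assumes the kubeconfig minikube just wrote sits at the default location, and the pod name is only an example copied from the log.

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load the kubeconfig minikube wrote (default path, e.g. %USERPROFILE%\.kube\config on Windows).
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		// Mirror the pod_ready check: fetch one kube-system pod and report its Ready condition.
		// "kube-proxy-ckbkv" is just the example pod name from the log above.
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-proxy-ckbkv", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady {
				fmt.Printf("pod %s Ready=%s\n", pod.Name, cond.Status)
			}
		}

		// Mirror the NodePressure/capacity check: list nodes and print allocatable cpu and ephemeral storage.
		nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			fmt.Printf("node %s cpu=%s ephemeral-storage=%s\n",
				n.Name,
				n.Status.Allocatable.Cpu().String(),
				n.Status.Allocatable.StorageEphemeral().String())
		}
	}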
	
	
	==> Docker <==
	May 28 22:04:43 multinode-517300 dockerd[1322]: time="2024-05-28T22:04:43.049834958Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 28 22:04:43 multinode-517300 dockerd[1322]: time="2024-05-28T22:04:43.067208053Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 28 22:04:43 multinode-517300 dockerd[1322]: time="2024-05-28T22:04:43.067435353Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 28 22:04:43 multinode-517300 dockerd[1322]: time="2024-05-28T22:04:43.067536452Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 28 22:04:43 multinode-517300 dockerd[1322]: time="2024-05-28T22:04:43.068040452Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 28 22:04:43 multinode-517300 cri-dockerd[1223]: time="2024-05-28T22:04:43Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/879c412c7f4559b86929a90f0507a564135a2f887bad3174f45a1cf381624050/resolv.conf as [nameserver 172.18.208.1]"
	May 28 22:04:43 multinode-517300 cri-dockerd[1223]: time="2024-05-28T22:04:43Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/666496aef7c5019a9b0fb96cdbdeaf270fd9423d0664bb4ff00eb935f13745e4/resolv.conf as [nameserver 172.18.208.1]"
	May 28 22:04:43 multinode-517300 dockerd[1322]: time="2024-05-28T22:04:43.428971852Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 28 22:04:43 multinode-517300 dockerd[1322]: time="2024-05-28T22:04:43.430138152Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 28 22:04:43 multinode-517300 dockerd[1322]: time="2024-05-28T22:04:43.430514852Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 28 22:04:43 multinode-517300 dockerd[1322]: time="2024-05-28T22:04:43.432832951Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 28 22:04:43 multinode-517300 dockerd[1322]: time="2024-05-28T22:04:43.632645106Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 28 22:04:43 multinode-517300 dockerd[1322]: time="2024-05-28T22:04:43.634919306Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 28 22:04:43 multinode-517300 dockerd[1322]: time="2024-05-28T22:04:43.634960106Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 28 22:04:43 multinode-517300 dockerd[1322]: time="2024-05-28T22:04:43.635245506Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 28 22:08:17 multinode-517300 dockerd[1322]: time="2024-05-28T22:08:17.252478131Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 28 22:08:17 multinode-517300 dockerd[1322]: time="2024-05-28T22:08:17.252878632Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 28 22:08:17 multinode-517300 dockerd[1322]: time="2024-05-28T22:08:17.252901832Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 28 22:08:17 multinode-517300 dockerd[1322]: time="2024-05-28T22:08:17.253005432Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 28 22:08:17 multinode-517300 cri-dockerd[1223]: time="2024-05-28T22:08:17Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/120fc411395d2ac07f0cbec5ecc83982310c83a3e1af7e99e35426482608689c/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	May 28 22:08:18 multinode-517300 cri-dockerd[1223]: time="2024-05-28T22:08:18Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	May 28 22:08:18 multinode-517300 dockerd[1322]: time="2024-05-28T22:08:18.931995021Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 28 22:08:18 multinode-517300 dockerd[1322]: time="2024-05-28T22:08:18.932507021Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 28 22:08:18 multinode-517300 dockerd[1322]: time="2024-05-28T22:08:18.932701322Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 28 22:08:18 multinode-517300 dockerd[1322]: time="2024-05-28T22:08:18.933023322Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	bf13b101a11ff       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   47 seconds ago      Running             busybox                   0                   120fc411395d2       busybox-fc5497c4f-r5pc4
	46e6bf352b791       cbb01a7bd410d                                                                                         4 minutes ago       Running             coredns                   0                   666496aef7c50       coredns-7db6d8ff4d-p9vb9
	74720800aab45       6e38f40d628db                                                                                         4 minutes ago       Running             storage-provisioner       0                   879c412c7f455       storage-provisioner
	3799d49aa0d84       kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8              4 minutes ago       Running             kindnet-cni               0                   3f800c86d1d40       kindnet-rmwrj
	ec6e090d8a58e       747097150317f                                                                                         4 minutes ago       Running             kube-proxy                0                   d1b2ab67f8e32       kube-proxy-hbbq2
	1b70e592d21ff       25a1387cdab82                                                                                         4 minutes ago       Running             kube-controller-manager   0                   453e3ca7e2c8f       kube-controller-manager-multinode-517300
	b44c8d48ef931       3861cfcd7c04c                                                                                         4 minutes ago       Running             etcd                      0                   3b2820676d5c2       etcd-multinode-517300
	b21e4a41b2feb       91be940803172                                                                                         4 minutes ago       Running             kube-apiserver            0                   74bfa22b48f31       kube-apiserver-multinode-517300
	d79040b216c8f       a52dc94f0a912                                                                                         4 minutes ago       Running             kube-scheduler            0                   de5c7dce36831       kube-scheduler-multinode-517300
	
	
	==> coredns [46e6bf352b79] <==
	[INFO] 10.244.1.2:45618 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001179s
	[INFO] 10.244.0.3:59346 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000141101s
	[INFO] 10.244.0.3:47223 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0001462s
	[INFO] 10.244.0.3:50815 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0001192s
	[INFO] 10.244.0.3:46626 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000071101s
	[INFO] 10.244.0.3:42764 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0001483s
	[INFO] 10.244.0.3:51566 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0004042s
	[INFO] 10.244.0.3:58939 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0002092s
	[INFO] 10.244.0.3:41939 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000621s
	[INFO] 10.244.1.2:57605 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000165901s
	[INFO] 10.244.1.2:44917 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0000843s
	[INFO] 10.244.1.2:33167 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000577s
	[INFO] 10.244.1.2:51141 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000829s
	[INFO] 10.244.0.3:58939 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0002064s
	[INFO] 10.244.0.3:58515 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0000665s
	[INFO] 10.244.0.3:59330 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000432901s
	[INFO] 10.244.0.3:60269 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000599s
	[INFO] 10.244.1.2:33516 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001233s
	[INFO] 10.244.1.2:42500 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000374701s
	[INFO] 10.244.1.2:51170 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0001075s
	[INFO] 10.244.1.2:42605 - 5 "PTR IN 1.208.18.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0000851s
	[INFO] 10.244.0.3:40521 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001483s
	[INFO] 10.244.0.3:45749 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000925501s
	[INFO] 10.244.0.3:42750 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000223301s
	[INFO] 10.244.0.3:53798 - 5 "PTR IN 1.208.18.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000082901s
	
	
	==> describe nodes <==
	Name:               multinode-517300
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-517300
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c95b4fdda455689199e2a93674568b261e34dc82
	                    minikube.k8s.io/name=multinode-517300
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_28T15_04_19_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 28 May 2024 22:04:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-517300
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 28 May 2024 22:09:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 28 May 2024 22:08:54 +0000   Tue, 28 May 2024 22:04:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 28 May 2024 22:08:54 +0000   Tue, 28 May 2024 22:04:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 28 May 2024 22:08:54 +0000   Tue, 28 May 2024 22:04:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 28 May 2024 22:08:54 +0000   Tue, 28 May 2024 22:04:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.18.214.123
	  Hostname:    multinode-517300
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 93e5d5841b8040b486df86269a57aea2
	  System UUID:                73fc560d-17a3-fb45-ad7a-d3ae5b4eb5c5
	  Boot ID:                    baad6a6d-9df4-4aaa-881e-ba51e93869f7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-r5pc4                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kube-system                 coredns-7db6d8ff4d-p9vb9                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m34s
	  kube-system                 etcd-multinode-517300                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m48s
	  kube-system                 kindnet-rmwrj                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m35s
	  kube-system                 kube-apiserver-multinode-517300             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m48s
	  kube-system                 kube-controller-manager-multinode-517300    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m48s
	  kube-system                 kube-proxy-hbbq2                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m35s
	  kube-system                 kube-scheduler-multinode-517300             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m48s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m32s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  4m56s (x8 over 4m56s)  kubelet          Node multinode-517300 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m56s (x8 over 4m56s)  kubelet          Node multinode-517300 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m56s (x7 over 4m56s)  kubelet          Node multinode-517300 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m56s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m48s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m48s                  kubelet          Node multinode-517300 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m48s                  kubelet          Node multinode-517300 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m48s                  kubelet          Node multinode-517300 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m48s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m35s                  node-controller  Node multinode-517300 event: Registered Node multinode-517300 in Controller
	  Normal  NodeReady                4m24s                  kubelet          Node multinode-517300 status is now: NodeReady
	
	
	Name:               multinode-517300-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-517300-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c95b4fdda455689199e2a93674568b261e34dc82
	                    minikube.k8s.io/name=multinode-517300
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_28T15_07_27_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 28 May 2024 22:07:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-517300-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 28 May 2024 22:08:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 28 May 2024 22:08:28 +0000   Tue, 28 May 2024 22:07:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 28 May 2024 22:08:28 +0000   Tue, 28 May 2024 22:07:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 28 May 2024 22:08:28 +0000   Tue, 28 May 2024 22:07:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 28 May 2024 22:08:28 +0000   Tue, 28 May 2024 22:07:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.18.223.89
	  Hostname:    multinode-517300-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 7131ddbbf859438a85bb64ac53c68e52
	  System UUID:                1e55cbce-9555-f041-9232-dd950f326fbc
	  Boot ID:                    f4f14f60-2d99-4b08-a756-91943b25705e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-7bglt    0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kube-system                 kindnet-jf49h              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      99s
	  kube-system                 kube-proxy-ckbkv           0 (0%)        0 (0%)      0 (0%)           0 (0%)         99s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 85s                kube-proxy       
	  Normal  NodeHasSufficientMemory  99s (x2 over 99s)  kubelet          Node multinode-517300-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    99s (x2 over 99s)  kubelet          Node multinode-517300-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     99s (x2 over 99s)  kubelet          Node multinode-517300-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  99s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           95s                node-controller  Node multinode-517300-m02 event: Registered Node multinode-517300-m02 in Controller
	  Normal  NodeReady                75s                kubelet          Node multinode-517300-m02 status is now: NodeReady
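	For reference, the "Allocated resources" percentages in the node tables above are just the summed container requests divided by the node's allocatable capacity (e.g. 850m requested of 2 allocatable CPUs, about 42%). A minimal sketch of that arithmetic, assuming only k8s.io/apimachinery's resource package is available:

	package main

	import (
		"fmt"

		"k8s.io/apimachinery/pkg/api/resource"
	)

	func main() {
		// Values taken from the multinode-517300 table above: 2 allocatable CPUs, 850m summed requests.
		allocatable := resource.MustParse("2")
		requested := resource.MustParse("850m")

		// Integer division matches how kubectl describe truncates the percentage.
		pct := requested.MilliValue() * 100 / allocatable.MilliValue()
		fmt.Printf("cpu requests: %s of %s (%d%%)\n", requested.String(), allocatable.String(), pct)
		// Prints: cpu requests: 850m of 2 (42%)
	}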
	
	
	==> dmesg <==
	[  +6.968964] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[May28 22:03] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.167742] systemd-fstab-generator[650]: Ignoring "noauto" option for root device
	[ +31.020650] systemd-fstab-generator[940]: Ignoring "noauto" option for root device
	[  +0.105588] kauditd_printk_skb: 59 callbacks suppressed
	[  +0.554032] systemd-fstab-generator[980]: Ignoring "noauto" option for root device
	[  +0.203859] systemd-fstab-generator[992]: Ignoring "noauto" option for root device
	[  +0.233990] systemd-fstab-generator[1006]: Ignoring "noauto" option for root device
	[  +2.783164] systemd-fstab-generator[1176]: Ignoring "noauto" option for root device
	[  +0.216717] systemd-fstab-generator[1188]: Ignoring "noauto" option for root device
	[  +0.191966] systemd-fstab-generator[1200]: Ignoring "noauto" option for root device
	[  +0.289359] systemd-fstab-generator[1215]: Ignoring "noauto" option for root device
	[ +11.575411] systemd-fstab-generator[1308]: Ignoring "noauto" option for root device
	[  +0.102608] kauditd_printk_skb: 205 callbacks suppressed
	[May28 22:04] systemd-fstab-generator[1502]: Ignoring "noauto" option for root device
	[  +7.500475] systemd-fstab-generator[1713]: Ignoring "noauto" option for root device
	[  +0.091175] kauditd_printk_skb: 73 callbacks suppressed
	[  +8.050700] systemd-fstab-generator[2122]: Ignoring "noauto" option for root device
	[  +0.127871] kauditd_printk_skb: 62 callbacks suppressed
	[ +14.692802] systemd-fstab-generator[2317]: Ignoring "noauto" option for root device
	[  +0.223692] kauditd_printk_skb: 12 callbacks suppressed
	[  +7.126716] kauditd_printk_skb: 51 callbacks suppressed
	[May28 22:08] kauditd_printk_skb: 23 callbacks suppressed
	
	
	==> etcd [b44c8d48ef93] <==
	{"level":"info","ts":"2024-05-28T22:04:12.639443Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 4e802fd86460e324 elected leader 4e802fd86460e324 at term 2"}
	{"level":"info","ts":"2024-05-28T22:04:12.644768Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-28T22:04:12.649904Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"4e802fd86460e324","local-member-attributes":"{Name:multinode-517300 ClientURLs:[https://172.18.214.123:2379]}","request-path":"/0/members/4e802fd86460e324/attributes","cluster-id":"b71e968195641782","publish-timeout":"7s"}
	{"level":"info","ts":"2024-05-28T22:04:12.650369Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-28T22:04:12.650977Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-28T22:04:12.65141Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-28T22:04:12.653677Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-05-28T22:04:12.663823Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.18.214.123:2379"}
	{"level":"info","ts":"2024-05-28T22:04:12.677916Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"b71e968195641782","local-member-id":"4e802fd86460e324","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-28T22:04:12.678177Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-28T22:04:12.678407Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-28T22:04:12.678681Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-05-28T22:04:56.015058Z","caller":"traceutil/trace.go:171","msg":"trace[1054268755] transaction","detail":"{read_only:false; response_revision:462; number_of_response:1; }","duration":"123.091179ms","start":"2024-05-28T22:04:55.891944Z","end":"2024-05-28T22:04:56.015035Z","steps":["trace[1054268755] 'process raft request'  (duration: 122.849279ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-28T22:07:31.379551Z","caller":"traceutil/trace.go:171","msg":"trace[979713909] transaction","detail":"{read_only:false; response_revision:619; number_of_response:1; }","duration":"148.267584ms","start":"2024-05-28T22:07:31.231268Z","end":"2024-05-28T22:07:31.379536Z","steps":["trace[979713909] 'process raft request'  (duration: 148.144483ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-28T22:07:31.471061Z","caller":"traceutil/trace.go:171","msg":"trace[1002212830] linearizableReadLoop","detail":"{readStateIndex:668; appliedIndex:667; }","duration":"166.932333ms","start":"2024-05-28T22:07:31.304109Z","end":"2024-05-28T22:07:31.471042Z","steps":["trace[1002212830] 'read index received'  (duration: 75.638096ms)","trace[1002212830] 'applied index is now lower than readState.Index'  (duration: 91.293437ms)"],"step_count":2}
	{"level":"info","ts":"2024-05-28T22:07:31.471299Z","caller":"traceutil/trace.go:171","msg":"trace[427496682] transaction","detail":"{read_only:false; response_revision:620; number_of_response:1; }","duration":"233.967006ms","start":"2024-05-28T22:07:31.237321Z","end":"2024-05-28T22:07:31.471288Z","steps":["trace[427496682] 'process raft request'  (duration: 199.263216ms)","trace[427496682] 'compare'  (duration: 34.190089ms)"],"step_count":2}
	{"level":"warn","ts":"2024-05-28T22:07:31.471689Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"167.632934ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-517300-m02\" ","response":"range_response_count:1 size:2847"}
	{"level":"info","ts":"2024-05-28T22:07:31.472206Z","caller":"traceutil/trace.go:171","msg":"trace[1789489090] range","detail":"{range_begin:/registry/minions/multinode-517300-m02; range_end:; response_count:1; response_revision:620; }","duration":"168.220836ms","start":"2024-05-28T22:07:31.303974Z","end":"2024-05-28T22:07:31.472195Z","steps":["trace[1789489090] 'agreement among raft nodes before linearized reading'  (duration: 167.635534ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-28T22:07:31.471989Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"120.895113ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1116"}
	{"level":"info","ts":"2024-05-28T22:07:31.473271Z","caller":"traceutil/trace.go:171","msg":"trace[472381155] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:620; }","duration":"122.196716ms","start":"2024-05-28T22:07:31.351062Z","end":"2024-05-28T22:07:31.473259Z","steps":["trace[472381155] 'agreement among raft nodes before linearized reading'  (duration: 120.822813ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-28T22:07:31.771768Z","caller":"traceutil/trace.go:171","msg":"trace[826495935] linearizableReadLoop","detail":"{readStateIndex:669; appliedIndex:668; }","duration":"119.788111ms","start":"2024-05-28T22:07:31.651962Z","end":"2024-05-28T22:07:31.77175Z","steps":["trace[826495935] 'read index received'  (duration: 76.661299ms)","trace[826495935] 'applied index is now lower than readState.Index'  (duration: 43.126212ms)"],"step_count":2}
	{"level":"info","ts":"2024-05-28T22:07:31.771866Z","caller":"traceutil/trace.go:171","msg":"trace[574423982] transaction","detail":"{read_only:false; response_revision:621; number_of_response:1; }","duration":"288.105746ms","start":"2024-05-28T22:07:31.483748Z","end":"2024-05-28T22:07:31.771854Z","steps":["trace[574423982] 'process raft request'  (duration: 244.976834ms)","trace[574423982] 'compare'  (duration: 42.872311ms)"],"step_count":2}
	{"level":"warn","ts":"2024-05-28T22:07:31.771962Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"119.966711ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/persistentvolumeclaims/\" range_end:\"/registry/persistentvolumeclaims0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-05-28T22:07:31.774105Z","caller":"traceutil/trace.go:171","msg":"trace[452996457] range","detail":"{range_begin:/registry/persistentvolumeclaims/; range_end:/registry/persistentvolumeclaims0; response_count:0; response_revision:621; }","duration":"122.181417ms","start":"2024-05-28T22:07:31.651912Z","end":"2024-05-28T22:07:31.774093Z","steps":["trace[452996457] 'agreement among raft nodes before linearized reading'  (duration: 119.996111ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-28T22:07:38.020263Z","caller":"traceutil/trace.go:171","msg":"trace[783441741] transaction","detail":"{read_only:false; response_revision:630; number_of_response:1; }","duration":"207.968943ms","start":"2024-05-28T22:07:37.812277Z","end":"2024-05-28T22:07:38.020246Z","steps":["trace[783441741] 'process raft request'  (duration: 207.862242ms)"],"step_count":1}
	
	
	==> kernel <==
	 22:09:06 up 6 min,  0 users,  load average: 0.25, 0.33, 0.19
	Linux multinode-517300 5.10.207 #1 SMP Wed May 22 22:17:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [3799d49aa0d8] <==
	I0528 22:08:00.894114       1 main.go:250] Node multinode-517300-m02 has CIDR [10.244.1.0/24] 
	I0528 22:08:10.905316       1 main.go:223] Handling node with IPs: map[172.18.214.123:{}]
	I0528 22:08:10.905589       1 main.go:227] handling current node
	I0528 22:08:10.905701       1 main.go:223] Handling node with IPs: map[172.18.223.89:{}]
	I0528 22:08:10.905792       1 main.go:250] Node multinode-517300-m02 has CIDR [10.244.1.0/24] 
	I0528 22:08:20.911593       1 main.go:223] Handling node with IPs: map[172.18.214.123:{}]
	I0528 22:08:20.911752       1 main.go:227] handling current node
	I0528 22:08:20.911769       1 main.go:223] Handling node with IPs: map[172.18.223.89:{}]
	I0528 22:08:20.911778       1 main.go:250] Node multinode-517300-m02 has CIDR [10.244.1.0/24] 
	I0528 22:08:30.918321       1 main.go:223] Handling node with IPs: map[172.18.214.123:{}]
	I0528 22:08:30.918426       1 main.go:227] handling current node
	I0528 22:08:30.918442       1 main.go:223] Handling node with IPs: map[172.18.223.89:{}]
	I0528 22:08:30.918460       1 main.go:250] Node multinode-517300-m02 has CIDR [10.244.1.0/24] 
	I0528 22:08:40.934023       1 main.go:223] Handling node with IPs: map[172.18.214.123:{}]
	I0528 22:08:40.934120       1 main.go:227] handling current node
	I0528 22:08:40.934134       1 main.go:223] Handling node with IPs: map[172.18.223.89:{}]
	I0528 22:08:40.934140       1 main.go:250] Node multinode-517300-m02 has CIDR [10.244.1.0/24] 
	I0528 22:08:50.940329       1 main.go:223] Handling node with IPs: map[172.18.214.123:{}]
	I0528 22:08:50.940427       1 main.go:227] handling current node
	I0528 22:08:50.940441       1 main.go:223] Handling node with IPs: map[172.18.223.89:{}]
	I0528 22:08:50.940447       1 main.go:250] Node multinode-517300-m02 has CIDR [10.244.1.0/24] 
	I0528 22:09:00.955147       1 main.go:223] Handling node with IPs: map[172.18.214.123:{}]
	I0528 22:09:00.955192       1 main.go:227] handling current node
	I0528 22:09:00.955204       1 main.go:223] Handling node with IPs: map[172.18.223.89:{}]
	I0528 22:09:00.955210       1 main.go:250] Node multinode-517300-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [b21e4a41b2fe] <==
	I0528 22:04:15.828517       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0528 22:04:15.839252       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0528 22:04:15.839295       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0528 22:04:17.022403       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0528 22:04:17.116012       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0528 22:04:17.257070       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0528 22:04:17.290571       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.18.214.123]
	I0528 22:04:17.291857       1 controller.go:615] quota admission added evaluator for: endpoints
	I0528 22:04:17.302570       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0528 22:04:17.889710       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0528 22:04:18.092340       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0528 22:04:18.152384       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0528 22:04:18.187844       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0528 22:04:31.790060       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0528 22:04:32.044894       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0528 22:08:22.342418       1 conn.go:339] Error on socket receive: read tcp 172.18.214.123:8443->172.18.208.1:54812: use of closed network connection
	E0528 22:08:22.788162       1 conn.go:339] Error on socket receive: read tcp 172.18.214.123:8443->172.18.208.1:54814: use of closed network connection
	E0528 22:08:23.298941       1 conn.go:339] Error on socket receive: read tcp 172.18.214.123:8443->172.18.208.1:54816: use of closed network connection
	E0528 22:08:23.741319       1 conn.go:339] Error on socket receive: read tcp 172.18.214.123:8443->172.18.208.1:54818: use of closed network connection
	E0528 22:08:24.220171       1 conn.go:339] Error on socket receive: read tcp 172.18.214.123:8443->172.18.208.1:54820: use of closed network connection
	E0528 22:08:24.639952       1 conn.go:339] Error on socket receive: read tcp 172.18.214.123:8443->172.18.208.1:54822: use of closed network connection
	E0528 22:08:25.393320       1 conn.go:339] Error on socket receive: read tcp 172.18.214.123:8443->172.18.208.1:54825: use of closed network connection
	E0528 22:08:35.814012       1 conn.go:339] Error on socket receive: read tcp 172.18.214.123:8443->172.18.208.1:54827: use of closed network connection
	E0528 22:08:36.228963       1 conn.go:339] Error on socket receive: read tcp 172.18.214.123:8443->172.18.208.1:54830: use of closed network connection
	E0528 22:08:46.657793       1 conn.go:339] Error on socket receive: read tcp 172.18.214.123:8443->172.18.208.1:54832: use of closed network connection
	
	
	==> kube-controller-manager [1b70e592d21f] <==
	I0528 22:04:31.803395       1 shared_informer.go:320] Caches are synced for garbage collector
	I0528 22:04:32.339795       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="283.698638ms"
	I0528 22:04:32.376974       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="36.845675ms"
	I0528 22:04:32.377072       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="34.5µs"
	I0528 22:04:33.464209       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="65.427703ms"
	I0528 22:04:33.493490       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="28.981213ms"
	I0528 22:04:33.493823       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="78.7µs"
	I0528 22:04:33.493962       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="71.899µs"
	I0528 22:04:42.451830       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="56.6µs"
	I0528 22:04:42.498643       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="58µs"
	I0528 22:04:44.604809       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="118.7µs"
	I0528 22:04:44.667771       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="23.610006ms"
	I0528 22:04:44.668202       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="200.5µs"
	I0528 22:04:46.195642       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0528 22:07:27.213938       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-517300-m02\" does not exist"
	I0528 22:07:27.226042       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-517300-m02" podCIDRs=["10.244.1.0/24"]
	I0528 22:07:31.228345       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-517300-m02"
	I0528 22:07:51.174806       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-517300-m02"
	I0528 22:08:16.713461       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="89.59304ms"
	I0528 22:08:16.752067       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="37.939402ms"
	I0528 22:08:16.752155       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="33.5µs"
	I0528 22:08:19.527410       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="16.489518ms"
	I0528 22:08:19.527571       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="97.1µs"
	I0528 22:08:19.942951       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.880513ms"
	I0528 22:08:19.943273       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41µs"
	
	
	==> kube-proxy [ec6e090d8a58] <==
	I0528 22:04:33.388317       1 server_linux.go:69] "Using iptables proxy"
	I0528 22:04:33.409384       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.18.214.123"]
	I0528 22:04:33.519730       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0528 22:04:33.519856       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0528 22:04:33.519880       1 server_linux.go:165] "Using iptables Proxier"
	I0528 22:04:33.524815       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0528 22:04:33.525225       1 server.go:872] "Version info" version="v1.30.1"
	I0528 22:04:33.525314       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0528 22:04:33.527247       1 config.go:192] "Starting service config controller"
	I0528 22:04:33.534238       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0528 22:04:33.528207       1 config.go:319] "Starting node config controller"
	I0528 22:04:33.534645       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0528 22:04:33.534886       1 shared_informer.go:320] Caches are synced for node config
	I0528 22:04:33.527637       1 config.go:101] "Starting endpoint slice config controller"
	I0528 22:04:33.535296       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0528 22:04:33.535487       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0528 22:04:33.535760       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [d79040b216c8] <==
	W0528 22:04:15.906539       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0528 22:04:15.906594       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0528 22:04:15.974569       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0528 22:04:15.974920       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0528 22:04:15.978379       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0528 22:04:15.978603       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0528 22:04:15.982465       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0528 22:04:15.982743       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0528 22:04:16.016813       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0528 22:04:16.018026       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0528 22:04:16.098521       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0528 22:04:16.098961       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0528 22:04:16.160174       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0528 22:04:16.160284       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0528 22:04:16.185917       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0528 22:04:16.186824       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0528 22:04:16.300433       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0528 22:04:16.300744       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0528 22:04:16.463944       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0528 22:04:16.464359       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0528 22:04:16.521892       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0528 22:04:16.521995       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0528 22:04:16.544827       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0528 22:04:16.545019       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0528 22:04:17.846264       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	May 28 22:04:42 multinode-517300 kubelet[2129]: I0528 22:04:42.606935    2129 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpwj9\" (UniqueName: \"kubernetes.io/projected/ab978e2d-322a-4207-9539-d2d92a25e566-kube-api-access-hpwj9\") pod \"storage-provisioner\" (UID: \"ab978e2d-322a-4207-9539-d2d92a25e566\") " pod="kube-system/storage-provisioner"
	May 28 22:04:44 multinode-517300 kubelet[2129]: I0528 22:04:44.603491    2129 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-p9vb9" podStartSLOduration=12.603469262 podStartE2EDuration="12.603469262s" podCreationTimestamp="2024-05-28 22:04:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-05-28 22:04:44.603299462 +0000 UTC m=+26.669974509" watchObservedRunningTime="2024-05-28 22:04:44.603469262 +0000 UTC m=+26.670144309"
	May 28 22:05:18 multinode-517300 kubelet[2129]: E0528 22:05:18.180474    2129 iptables.go:577] "Could not set up iptables canary" err=<
	May 28 22:05:18 multinode-517300 kubelet[2129]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 28 22:05:18 multinode-517300 kubelet[2129]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 28 22:05:18 multinode-517300 kubelet[2129]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 28 22:05:18 multinode-517300 kubelet[2129]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 28 22:06:18 multinode-517300 kubelet[2129]: E0528 22:06:18.176892    2129 iptables.go:577] "Could not set up iptables canary" err=<
	May 28 22:06:18 multinode-517300 kubelet[2129]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 28 22:06:18 multinode-517300 kubelet[2129]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 28 22:06:18 multinode-517300 kubelet[2129]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 28 22:06:18 multinode-517300 kubelet[2129]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 28 22:07:18 multinode-517300 kubelet[2129]: E0528 22:07:18.177269    2129 iptables.go:577] "Could not set up iptables canary" err=<
	May 28 22:07:18 multinode-517300 kubelet[2129]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 28 22:07:18 multinode-517300 kubelet[2129]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 28 22:07:18 multinode-517300 kubelet[2129]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 28 22:07:18 multinode-517300 kubelet[2129]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 28 22:08:16 multinode-517300 kubelet[2129]: I0528 22:08:16.696879    2129 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=216.696849839 podStartE2EDuration="3m36.696849839s" podCreationTimestamp="2024-05-28 22:04:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-05-28 22:04:44.675055678 +0000 UTC m=+26.741730625" watchObservedRunningTime="2024-05-28 22:08:16.696849839 +0000 UTC m=+238.763524886"
	May 28 22:08:16 multinode-517300 kubelet[2129]: I0528 22:08:16.697417    2129 topology_manager.go:215] "Topology Admit Handler" podUID="ea8bd23f-d46d-4856-89b0-b4455180e6b6" podNamespace="default" podName="busybox-fc5497c4f-r5pc4"
	May 28 22:08:16 multinode-517300 kubelet[2129]: I0528 22:08:16.844835    2129 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbtf2\" (UniqueName: \"kubernetes.io/projected/ea8bd23f-d46d-4856-89b0-b4455180e6b6-kube-api-access-qbtf2\") pod \"busybox-fc5497c4f-r5pc4\" (UID: \"ea8bd23f-d46d-4856-89b0-b4455180e6b6\") " pod="default/busybox-fc5497c4f-r5pc4"
	May 28 22:08:18 multinode-517300 kubelet[2129]: E0528 22:08:18.175291    2129 iptables.go:577] "Could not set up iptables canary" err=<
	May 28 22:08:18 multinode-517300 kubelet[2129]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 28 22:08:18 multinode-517300 kubelet[2129]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 28 22:08:18 multinode-517300 kubelet[2129]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 28 22:08:18 multinode-517300 kubelet[2129]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0528 15:08:58.452463    7036 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
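Note on the stderr above: the only output is the recurring Docker CLI context warning. The client's current context is set to "default", but the hashed metadata file it looks for (37a8eec1… appears to be the sha256 digest of the name "default") does not exist under C:\Users\jenkins.minikube1\.docker\contexts\meta, so context resolution emits a warning and the command still proceeds. A quick way to inspect and reset the context on the Jenkins host might look like the following (illustrative commands, not part of the test run):

	# show the contexts the Docker CLI knows about and which one is current
	docker context ls
	# point the CLI back at the built-in default endpoint so no metadata file is required
	docker context use default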
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-517300 -n multinode-517300
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-517300 -n multinode-517300: (11.8226494s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-517300 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (55.48s)
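The post-mortem above shows the cluster itself largely healthy: kindnet is publishing routes for both nodes, the busybox test pod is running, and the only kube-apiserver noise is closed-connection errors on connections from 172.18.208.1. That suggests the failure sits on the guest-to-host path rather than in pod networking, since this test, as its name implies, pings the host from the deployed busybox pods. A manual reproduction, assuming the busybox pod name taken from the kubelet log and treating 172.18.208.1 (the host side of the Hyper-V switch as seen in the apiserver log) as the ping target, could be:

	# ping the Windows host from inside the test pod; on Hyper-V, a host firewall rule
	# blocking inbound ICMPv4 on the vEthernet adapter is a common reason this fails
	kubectl --context multinode-517300 exec busybox-fc5497c4f-r5pc4 -- ping -c 1 172.18.208.1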

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (464.6s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-517300
multinode_test.go:321: (dbg) Run:  out/minikube-windows-amd64.exe stop -p multinode-517300
E0528 15:25:06.457400    1780 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-854100\client.crt: The system cannot find the path specified.
multinode_test.go:321: (dbg) Done: out/minikube-windows-amd64.exe stop -p multinode-517300: (1m35.6811533s)
multinode_test.go:326: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-517300 --wait=true -v=8 --alsologtostderr
E0528 15:26:32.912997    1780 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-200200\client.crt: The system cannot find the path specified.
E0528 15:28:29.675343    1780 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-200200\client.crt: The system cannot find the path specified.
E0528 15:30:06.441299    1780 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-854100\client.crt: The system cannot find the path specified.
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p multinode-517300 --wait=true -v=8 --alsologtostderr: exit status 90 (5m33.3194753s)

                                                
                                                
-- stdout --
	* [multinode-517300] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4412 Build 19045.4412
	  - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18966
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on existing profile
	* Starting "multinode-517300" primary control-plane node in "multinode-517300" cluster
	* Restarting existing hyperv VM for "multinode-517300" ...
	* Preparing Kubernetes v1.30.1 on Docker 26.0.2 ...
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	
	* Starting "multinode-517300-m02" worker node in "multinode-517300" cluster
	* Restarting existing hyperv VM for "multinode-517300-m02" ...
	* Found network options:
	  - NO_PROXY=172.18.212.160
	  - NO_PROXY=172.18.212.160
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0528 15:25:24.791250   14720 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0528 15:25:24.793120   14720 out.go:291] Setting OutFile to fd 744 ...
	I0528 15:25:24.793120   14720 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 15:25:24.793120   14720 out.go:304] Setting ErrFile to fd 664...
	I0528 15:25:24.793120   14720 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 15:25:24.820227   14720 out.go:298] Setting JSON to false
	I0528 15:25:24.821968   14720 start.go:129] hostinfo: {"hostname":"minikube1","uptime":7889,"bootTime":1716927235,"procs":199,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4412 Build 19045.4412","kernelVersion":"10.0.19045.4412 Build 19045.4412","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0528 15:25:24.821968   14720 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0528 15:25:24.941113   14720 out.go:177] * [multinode-517300] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4412 Build 19045.4412
	I0528 15:25:24.970595   14720 notify.go:220] Checking for updates...
	I0528 15:25:25.000722   14720 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0528 15:25:25.019859   14720 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0528 15:25:25.072083   14720 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0528 15:25:25.079644   14720 out.go:177]   - MINIKUBE_LOCATION=18966
	I0528 15:25:25.106400   14720 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0528 15:25:25.115758   14720 config.go:182] Loaded profile config "multinode-517300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0528 15:25:25.116040   14720 driver.go:392] Setting default libvirt URI to qemu:///system
	I0528 15:25:30.367697   14720 out.go:177] * Using the hyperv driver based on existing profile
	I0528 15:25:30.397321   14720 start.go:297] selected driver: hyperv
	I0528 15:25:30.404488   14720 start.go:901] validating driver "hyperv" against &{Name:multinode-517300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-517300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.18.214.123 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.18.223.89 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.18.218.70 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0528 15:25:30.405100   14720 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0528 15:25:30.458688   14720 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0528 15:25:30.458688   14720 cni.go:84] Creating CNI manager for ""
	I0528 15:25:30.458688   14720 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0528 15:25:30.458688   14720 start.go:340] cluster config:
	{Name:multinode-517300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-517300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.18.214.123 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.18.223.89 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.18.218.70 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0528 15:25:30.458688   14720 iso.go:125] acquiring lock: {Name:mk309f19768252c5551da1c999633aa2617813e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0528 15:25:30.556966   14720 out.go:177] * Starting "multinode-517300" primary control-plane node in "multinode-517300" cluster
	I0528 15:25:30.560205   14720 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0528 15:25:30.560606   14720 preload.go:147] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
	I0528 15:25:30.560606   14720 cache.go:56] Caching tarball of preloaded images
	I0528 15:25:30.561141   14720 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0528 15:25:30.561340   14720 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0528 15:25:30.561636   14720 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-517300\config.json ...
	I0528 15:25:30.564285   14720 start.go:360] acquireMachinesLock for multinode-517300: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0528 15:25:30.564285   14720 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-517300"
	I0528 15:25:30.564843   14720 start.go:96] Skipping create...Using existing machine configuration
	I0528 15:25:30.564843   14720 fix.go:54] fixHost starting: 
	I0528 15:25:30.565756   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300 ).state
	I0528 15:25:33.198728   14720 main.go:141] libmachine: [stdout =====>] : Off
	
	I0528 15:25:33.198728   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:25:33.198728   14720 fix.go:112] recreateIfNeeded on multinode-517300: state=Stopped err=<nil>
	W0528 15:25:33.198728   14720 fix.go:138] unexpected machine state, will restart: <nil>
	I0528 15:25:33.259705   14720 out.go:177] * Restarting existing hyperv VM for "multinode-517300" ...
	I0528 15:25:33.301807   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-517300
	I0528 15:25:36.370835   14720 main.go:141] libmachine: [stdout =====>] : 
	I0528 15:25:36.370835   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:25:36.370835   14720 main.go:141] libmachine: Waiting for host to start...
	I0528 15:25:36.370990   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300 ).state
	I0528 15:25:38.603132   14720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:25:38.603132   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:25:38.603223   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300 ).networkadapters[0]).ipaddresses[0]
	I0528 15:25:41.047045   14720 main.go:141] libmachine: [stdout =====>] : 
	I0528 15:25:41.047045   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:25:42.047954   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300 ).state
	I0528 15:25:44.242847   14720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:25:44.253935   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:25:44.253935   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300 ).networkadapters[0]).ipaddresses[0]
	I0528 15:25:46.657784   14720 main.go:141] libmachine: [stdout =====>] : 
	I0528 15:25:46.657784   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:25:47.669960   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300 ).state
	I0528 15:25:49.798233   14720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:25:49.798233   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:25:49.800636   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300 ).networkadapters[0]).ipaddresses[0]
	I0528 15:25:52.224274   14720 main.go:141] libmachine: [stdout =====>] : 
	I0528 15:25:52.228477   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:25:53.231006   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300 ).state
	I0528 15:25:55.369449   14720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:25:55.369449   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:25:55.374368   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300 ).networkadapters[0]).ipaddresses[0]
	I0528 15:25:57.809941   14720 main.go:141] libmachine: [stdout =====>] : 
	I0528 15:25:57.809941   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:25:58.828940   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300 ).state
	I0528 15:26:00.956568   14720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:26:00.956568   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:26:00.967839   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300 ).networkadapters[0]).ipaddresses[0]
	I0528 15:26:03.423642   14720 main.go:141] libmachine: [stdout =====>] : 172.18.212.160
	
	I0528 15:26:03.425986   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:26:03.428609   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300 ).state
	I0528 15:26:05.465538   14720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:26:05.476821   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:26:05.476947   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300 ).networkadapters[0]).ipaddresses[0]
	I0528 15:26:07.885570   14720 main.go:141] libmachine: [stdout =====>] : 172.18.212.160
	
	I0528 15:26:07.885570   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:26:07.897099   14720 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-517300\config.json ...
	I0528 15:26:07.899911   14720 machine.go:94] provisionDockerMachine start ...
	I0528 15:26:07.899911   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300 ).state
	I0528 15:26:09.990958   14720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:26:09.990958   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:26:09.991114   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300 ).networkadapters[0]).ipaddresses[0]
	I0528 15:26:12.467912   14720 main.go:141] libmachine: [stdout =====>] : 172.18.212.160
	
	I0528 15:26:12.467912   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:26:12.474218   14720 main.go:141] libmachine: Using SSH client type: native
	I0528 15:26:12.474419   14720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5ca4a0] 0x5cd080 <nil>  [] 0s} 172.18.212.160 22 <nil> <nil>}
	I0528 15:26:12.474419   14720 main.go:141] libmachine: About to run SSH command:
	hostname
	I0528 15:26:12.602030   14720 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0528 15:26:12.602292   14720 buildroot.go:166] provisioning hostname "multinode-517300"
	I0528 15:26:12.602382   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300 ).state
	I0528 15:26:14.681328   14720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:26:14.681328   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:26:14.692586   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300 ).networkadapters[0]).ipaddresses[0]
	I0528 15:26:17.154326   14720 main.go:141] libmachine: [stdout =====>] : 172.18.212.160
	
	I0528 15:26:17.154326   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:26:17.172216   14720 main.go:141] libmachine: Using SSH client type: native
	I0528 15:26:17.172741   14720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5ca4a0] 0x5cd080 <nil>  [] 0s} 172.18.212.160 22 <nil> <nil>}
	I0528 15:26:17.172741   14720 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-517300 && echo "multinode-517300" | sudo tee /etc/hostname
	I0528 15:26:17.323860   14720 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-517300
	
	I0528 15:26:17.323942   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300 ).state
	I0528 15:26:19.394560   14720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:26:19.394560   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:26:19.394856   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300 ).networkadapters[0]).ipaddresses[0]
	I0528 15:26:21.842142   14720 main.go:141] libmachine: [stdout =====>] : 172.18.212.160
	
	I0528 15:26:21.842142   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:26:21.857046   14720 main.go:141] libmachine: Using SSH client type: native
	I0528 15:26:21.857942   14720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5ca4a0] 0x5cd080 <nil>  [] 0s} 172.18.212.160 22 <nil> <nil>}
	I0528 15:26:21.857942   14720 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-517300' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-517300/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-517300' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0528 15:26:21.992394   14720 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0528 15:26:21.992394   14720 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0528 15:26:21.992394   14720 buildroot.go:174] setting up certificates
	I0528 15:26:21.992394   14720 provision.go:84] configureAuth start
	I0528 15:26:21.992394   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300 ).state
	I0528 15:26:24.066357   14720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:26:24.066357   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:26:24.082083   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300 ).networkadapters[0]).ipaddresses[0]
	I0528 15:26:26.535800   14720 main.go:141] libmachine: [stdout =====>] : 172.18.212.160
	
	I0528 15:26:26.535800   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:26:26.546807   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300 ).state
	I0528 15:26:28.575768   14720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:26:28.575768   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:26:28.575768   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300 ).networkadapters[0]).ipaddresses[0]
	I0528 15:26:31.019244   14720 main.go:141] libmachine: [stdout =====>] : 172.18.212.160
	
	I0528 15:26:31.019244   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:26:31.019458   14720 provision.go:143] copyHostCerts
	I0528 15:26:31.019716   14720 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0528 15:26:31.020471   14720 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0528 15:26:31.020471   14720 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0528 15:26:31.021228   14720 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1675 bytes)
	I0528 15:26:31.022957   14720 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0528 15:26:31.023405   14720 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0528 15:26:31.023545   14720 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0528 15:26:31.024015   14720 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0528 15:26:31.025597   14720 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0528 15:26:31.025992   14720 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0528 15:26:31.025992   14720 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0528 15:26:31.026532   14720 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0528 15:26:31.027345   14720 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-517300 san=[127.0.0.1 172.18.212.160 localhost minikube multinode-517300]
	I0528 15:26:31.173694   14720 provision.go:177] copyRemoteCerts
	I0528 15:26:31.185623   14720 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0528 15:26:31.185803   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300 ).state
	I0528 15:26:33.210400   14720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:26:33.210400   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:26:33.222187   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300 ).networkadapters[0]).ipaddresses[0]
	I0528 15:26:35.698699   14720 main.go:141] libmachine: [stdout =====>] : 172.18.212.160
	
	I0528 15:26:35.698699   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:26:35.699042   14720 sshutil.go:53] new ssh client: &{IP:172.18.212.160 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-517300\id_rsa Username:docker}
	I0528 15:26:35.798411   14720 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.612684s)
	I0528 15:26:35.798559   14720 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0528 15:26:35.799250   14720 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0528 15:26:35.840364   14720 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0528 15:26:35.840364   14720 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0528 15:26:35.884607   14720 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0528 15:26:35.885139   14720 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0528 15:26:35.925303   14720 provision.go:87] duration metric: took 13.9328657s to configureAuth
	I0528 15:26:35.925303   14720 buildroot.go:189] setting minikube options for container-runtime
	I0528 15:26:35.926251   14720 config.go:182] Loaded profile config "multinode-517300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0528 15:26:35.926251   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300 ).state
	I0528 15:26:37.987033   14720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:26:37.987033   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:26:37.987033   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300 ).networkadapters[0]).ipaddresses[0]
	I0528 15:26:40.420160   14720 main.go:141] libmachine: [stdout =====>] : 172.18.212.160
	
	I0528 15:26:40.420160   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:26:40.436760   14720 main.go:141] libmachine: Using SSH client type: native
	I0528 15:26:40.437540   14720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5ca4a0] 0x5cd080 <nil>  [] 0s} 172.18.212.160 22 <nil> <nil>}
	I0528 15:26:40.437540   14720 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0528 15:26:40.556271   14720 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0528 15:26:40.556362   14720 buildroot.go:70] root file system type: tmpfs
	I0528 15:26:40.556496   14720 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0528 15:26:40.556496   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300 ).state
	I0528 15:26:42.553713   14720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:26:42.553713   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:26:42.566120   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300 ).networkadapters[0]).ipaddresses[0]
	I0528 15:26:44.985445   14720 main.go:141] libmachine: [stdout =====>] : 172.18.212.160
	
	I0528 15:26:44.985445   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:26:45.001160   14720 main.go:141] libmachine: Using SSH client type: native
	I0528 15:26:45.001925   14720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5ca4a0] 0x5cd080 <nil>  [] 0s} 172.18.212.160 22 <nil> <nil>}
	I0528 15:26:45.001925   14720 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0528 15:26:45.156257   14720 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0528 15:26:45.156454   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300 ).state
	I0528 15:26:47.163834   14720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:26:47.163834   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:26:47.175756   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300 ).networkadapters[0]).ipaddresses[0]
	I0528 15:26:49.576490   14720 main.go:141] libmachine: [stdout =====>] : 172.18.212.160
	
	I0528 15:26:49.576490   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:26:49.596427   14720 main.go:141] libmachine: Using SSH client type: native
	I0528 15:26:49.596427   14720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5ca4a0] 0x5cd080 <nil>  [] 0s} 172.18.212.160 22 <nil> <nil>}
	I0528 15:26:49.596427   14720 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0528 15:26:51.980270   14720 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0528 15:26:51.980270   14720 machine.go:97] duration metric: took 44.0802217s to provisionDockerMachine
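The unit update above stages /lib/systemd/system/docker.service.new and only swaps it into place when diff reports a difference; in this run the old docker.service did not exist yet ("can't stat"), so the staged file was moved in and docker was enabled and restarted. A rough Go sketch of that replace-if-changed idiom, assuming plain local file access rather than the SSH runner used here:

	package main

	import (
		"bytes"
		"fmt"
		"os"
	)

	// replaceIfChanged swaps newPath over path when the contents differ, or when
	// path does not exist yet, and reports whether a swap happened.
	func replaceIfChanged(path, newPath string) (bool, error) {
		newData, err := os.ReadFile(newPath)
		if err != nil {
			return false, err
		}
		oldData, err := os.ReadFile(path)
		if err == nil && bytes.Equal(oldData, newData) {
			return false, os.Remove(newPath) // identical: drop the staged copy
		}
		return true, os.Rename(newPath, path)
	}

	func main() {
		changed, err := replaceIfChanged("docker.service", "docker.service.new")
		fmt.Println("changed:", changed, "err:", err)
		// After a swap the flow in the log runs `systemctl daemon-reload`,
		// `systemctl enable docker` and `systemctl restart docker`.
	}
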
	I0528 15:26:51.980270   14720 start.go:293] postStartSetup for "multinode-517300" (driver="hyperv")
	I0528 15:26:51.980270   14720 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0528 15:26:51.990863   14720 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0528 15:26:51.990863   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300 ).state
	I0528 15:26:54.032602   14720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:26:54.032602   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:26:54.032602   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300 ).networkadapters[0]).ipaddresses[0]
	I0528 15:26:56.454286   14720 main.go:141] libmachine: [stdout =====>] : 172.18.212.160
	
	I0528 15:26:56.465927   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:26:56.465927   14720 sshutil.go:53] new ssh client: &{IP:172.18.212.160 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-517300\id_rsa Username:docker}
	I0528 15:26:56.569122   14720 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.5781418s)
	I0528 15:26:56.581366   14720 ssh_runner.go:195] Run: cat /etc/os-release
	I0528 15:26:56.589288   14720 command_runner.go:130] > NAME=Buildroot
	I0528 15:26:56.589288   14720 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0528 15:26:56.589288   14720 command_runner.go:130] > ID=buildroot
	I0528 15:26:56.589288   14720 command_runner.go:130] > VERSION_ID=2023.02.9
	I0528 15:26:56.589288   14720 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0528 15:26:56.589406   14720 info.go:137] Remote host: Buildroot 2023.02.9
	I0528 15:26:56.589453   14720 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0528 15:26:56.589945   14720 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0528 15:26:56.590584   14720 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\17802.pem -> 17802.pem in /etc/ssl/certs
	I0528 15:26:56.590584   14720 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\17802.pem -> /etc/ssl/certs/17802.pem
	I0528 15:26:56.603804   14720 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0528 15:26:56.621097   14720 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\17802.pem --> /etc/ssl/certs/17802.pem (1708 bytes)
	I0528 15:26:56.663288   14720 start.go:296] duration metric: took 4.6830028s for postStartSetup
	I0528 15:26:56.663288   14720 fix.go:56] duration metric: took 1m26.0981766s for fixHost
	I0528 15:26:56.663288   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300 ).state
	I0528 15:26:58.778704   14720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:26:58.789832   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:26:58.790052   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300 ).networkadapters[0]).ipaddresses[0]
	I0528 15:27:01.359420   14720 main.go:141] libmachine: [stdout =====>] : 172.18.212.160
	
	I0528 15:27:01.363286   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:27:01.368997   14720 main.go:141] libmachine: Using SSH client type: native
	I0528 15:27:01.370106   14720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5ca4a0] 0x5cd080 <nil>  [] 0s} 172.18.212.160 22 <nil> <nil>}
	I0528 15:27:01.370175   14720 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0528 15:27:01.501827   14720 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716935221.502949732
	
	I0528 15:27:01.501827   14720 fix.go:216] guest clock: 1716935221.502949732
	I0528 15:27:01.501827   14720 fix.go:229] Guest: 2024-05-28 15:27:01.502949732 -0700 PDT Remote: 2024-05-28 15:26:56.6632883 -0700 PDT m=+91.953439201 (delta=4.839661432s)
	I0528 15:27:01.501827   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300 ).state
	I0528 15:27:03.564337   14720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:27:03.564337   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:27:03.564337   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300 ).networkadapters[0]).ipaddresses[0]
	I0528 15:27:05.993477   14720 main.go:141] libmachine: [stdout =====>] : 172.18.212.160
	
	I0528 15:27:05.993477   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:27:05.999949   14720 main.go:141] libmachine: Using SSH client type: native
	I0528 15:27:06.000095   14720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5ca4a0] 0x5cd080 <nil>  [] 0s} 172.18.212.160 22 <nil> <nil>}
	I0528 15:27:06.000095   14720 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1716935221
	I0528 15:27:06.136645   14720 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue May 28 22:27:01 UTC 2024
	
	I0528 15:27:06.136761   14720 fix.go:236] clock set: Tue May 28 22:27:01 UTC 2024
	 (err=<nil>)
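The "guest clock" lines above compare the VM's "date +%s.%N" reading against the host-side timestamp recorded when postStartSetup finished, and the resulting drift is why the provisioner then runs "sudo date -s @1716935221" on the guest. A small sketch that reproduces the delta computation, with both timestamps copied from this log:

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		// Guest reading from `date +%s.%N` and the host-side reference, both from the log above.
		guest := time.Unix(1716935221, 502949732)
		remote := time.Date(2024, 5, 28, 15, 26, 56, 663288300, time.FixedZone("PDT", -7*60*60))
		fmt.Println("delta:", guest.Sub(remote)) // prints 4.839661432s, matching the "(delta=...)" line
	}
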
	I0528 15:27:06.136761   14720 start.go:83] releasing machines lock for "multinode-517300", held for 1m35.5721784s
	I0528 15:27:06.136761   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300 ).state
	I0528 15:27:08.172243   14720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:27:08.172243   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:27:08.184445   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300 ).networkadapters[0]).ipaddresses[0]
	I0528 15:27:10.623498   14720 main.go:141] libmachine: [stdout =====>] : 172.18.212.160
	
	I0528 15:27:10.634300   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:27:10.639294   14720 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0528 15:27:10.639462   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300 ).state
	I0528 15:27:10.649333   14720 ssh_runner.go:195] Run: cat /version.json
	I0528 15:27:10.649333   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300 ).state
	I0528 15:27:12.810282   14720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:27:12.810282   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:27:12.810402   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300 ).networkadapters[0]).ipaddresses[0]
	I0528 15:27:12.810402   14720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:27:12.810402   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:27:12.810402   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300 ).networkadapters[0]).ipaddresses[0]
	I0528 15:27:15.486669   14720 main.go:141] libmachine: [stdout =====>] : 172.18.212.160
	
	I0528 15:27:15.486888   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:27:15.486888   14720 sshutil.go:53] new ssh client: &{IP:172.18.212.160 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-517300\id_rsa Username:docker}
	I0528 15:27:15.499652   14720 main.go:141] libmachine: [stdout =====>] : 172.18.212.160
	
	I0528 15:27:15.499652   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:27:15.505114   14720 sshutil.go:53] new ssh client: &{IP:172.18.212.160 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-517300\id_rsa Username:docker}
	I0528 15:27:15.797854   14720 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0528 15:27:15.797920   14720 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.1585539s)
	I0528 15:27:15.797920   14720 command_runner.go:130] > {"iso_version": "v1.33.1-1716398070-18934", "kicbase_version": "v0.0.44-1716228441-18934", "minikube_version": "v1.33.1", "commit": "7bc64cce06153f72c1bf9cbcf2114663ad5af3b7"}
	I0528 15:27:15.797920   14720 ssh_runner.go:235] Completed: cat /version.json: (5.1485706s)
	I0528 15:27:15.810189   14720 ssh_runner.go:195] Run: systemctl --version
	I0528 15:27:15.820078   14720 command_runner.go:130] > systemd 252 (252)
	I0528 15:27:15.820078   14720 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0528 15:27:15.832074   14720 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0528 15:27:15.842892   14720 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0528 15:27:15.844678   14720 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0528 15:27:15.856213   14720 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0528 15:27:15.885325   14720 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0528 15:27:15.885325   14720 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
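The "find ... -exec mv {} {}.mk_disabled" step above sidelines any bridge/podman CNI configs (here 87-podman-bridge.conflist), presumably so they cannot conflict with the CNI minikube selects later in the run (kindnet). A hypothetical Go equivalent of that rename pass, simplified and run without sudo:

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)

	func main() {
		dir := "/etc/cni/net.d" // on the guest; point at a scratch directory for local experiments
		entries, err := os.ReadDir(dir)
		if err != nil {
			fmt.Println(err)
			return
		}
		for _, e := range entries {
			name := e.Name()
			if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
				continue
			}
			if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
				src := filepath.Join(dir, name)
				fmt.Println("disabling", src)
				_ = os.Rename(src, src+".mk_disabled")
			}
		}
	}
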
	I0528 15:27:15.885325   14720 start.go:494] detecting cgroup driver to use...
	I0528 15:27:15.885325   14720 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0528 15:27:15.919459   14720 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0528 15:27:15.932235   14720 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0528 15:27:15.963657   14720 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0528 15:27:15.983429   14720 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0528 15:27:16.000076   14720 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0528 15:27:16.033662   14720 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0528 15:27:16.070882   14720 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0528 15:27:16.104322   14720 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0528 15:27:16.137991   14720 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0528 15:27:16.172248   14720 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0528 15:27:16.205462   14720 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0528 15:27:16.237391   14720 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0528 15:27:16.269287   14720 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0528 15:27:16.286547   14720 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0528 15:27:16.299631   14720 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0528 15:27:16.329340   14720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 15:27:16.528272   14720 ssh_runner.go:195] Run: sudo systemctl restart containerd
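The run of "sed -i" calls above rewrites /etc/containerd/config.toml in place: it pins the sandbox image to pause:3.9, forces SystemdCgroup = false to match the cgroupfs driver, migrates old runtime names to io.containerd.runc.v2, and points conf_dir at /etc/cni/net.d before containerd is restarted. A small sketch of one such edit done with Go's regexp package instead of sed, against a hypothetical local copy of the file:

	package main

	import (
		"fmt"
		"os"
		"regexp"
	)

	func main() {
		path := "config.toml" // hypothetical local copy of /etc/containerd/config.toml
		data, err := os.ReadFile(path)
		if err != nil {
			fmt.Println(err)
			return
		}
		// Equivalent of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
		re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
		data = re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
		if err := os.WriteFile(path, data, 0o644); err != nil {
			fmt.Println(err)
		}
	}
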
	I0528 15:27:16.557346   14720 start.go:494] detecting cgroup driver to use...
	I0528 15:27:16.569543   14720 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0528 15:27:16.593896   14720 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0528 15:27:16.593964   14720 command_runner.go:130] > [Unit]
	I0528 15:27:16.593964   14720 command_runner.go:130] > Description=Docker Application Container Engine
	I0528 15:27:16.593964   14720 command_runner.go:130] > Documentation=https://docs.docker.com
	I0528 15:27:16.593964   14720 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0528 15:27:16.593964   14720 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0528 15:27:16.593964   14720 command_runner.go:130] > StartLimitBurst=3
	I0528 15:27:16.594042   14720 command_runner.go:130] > StartLimitIntervalSec=60
	I0528 15:27:16.594042   14720 command_runner.go:130] > [Service]
	I0528 15:27:16.594042   14720 command_runner.go:130] > Type=notify
	I0528 15:27:16.594042   14720 command_runner.go:130] > Restart=on-failure
	I0528 15:27:16.594100   14720 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0528 15:27:16.594119   14720 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0528 15:27:16.594119   14720 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0528 15:27:16.594119   14720 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0528 15:27:16.594119   14720 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0528 15:27:16.594181   14720 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0528 15:27:16.594235   14720 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0528 15:27:16.594235   14720 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0528 15:27:16.594235   14720 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0528 15:27:16.594235   14720 command_runner.go:130] > ExecStart=
	I0528 15:27:16.594235   14720 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0528 15:27:16.594235   14720 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0528 15:27:16.594235   14720 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0528 15:27:16.594235   14720 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0528 15:27:16.594235   14720 command_runner.go:130] > LimitNOFILE=infinity
	I0528 15:27:16.594235   14720 command_runner.go:130] > LimitNPROC=infinity
	I0528 15:27:16.594235   14720 command_runner.go:130] > LimitCORE=infinity
	I0528 15:27:16.594235   14720 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0528 15:27:16.594235   14720 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0528 15:27:16.594235   14720 command_runner.go:130] > TasksMax=infinity
	I0528 15:27:16.594235   14720 command_runner.go:130] > TimeoutStartSec=0
	I0528 15:27:16.594235   14720 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0528 15:27:16.594235   14720 command_runner.go:130] > Delegate=yes
	I0528 15:27:16.594235   14720 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0528 15:27:16.594235   14720 command_runner.go:130] > KillMode=process
	I0528 15:27:16.594235   14720 command_runner.go:130] > [Install]
	I0528 15:27:16.594235   14720 command_runner.go:130] > WantedBy=multi-user.target
	I0528 15:27:16.606927   14720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0528 15:27:16.640646   14720 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0528 15:27:16.683361   14720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0528 15:27:16.719062   14720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0528 15:27:16.753625   14720 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0528 15:27:16.817746   14720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0528 15:27:16.844377   14720 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0528 15:27:16.876924   14720 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0528 15:27:16.889474   14720 ssh_runner.go:195] Run: which cri-dockerd
	I0528 15:27:16.896203   14720 command_runner.go:130] > /usr/bin/cri-dockerd
	I0528 15:27:16.907299   14720 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0528 15:27:16.925500   14720 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0528 15:27:16.970096   14720 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0528 15:27:17.167851   14720 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0528 15:27:17.349334   14720 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0528 15:27:17.349554   14720 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0528 15:27:17.394177   14720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 15:27:17.587080   14720 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0528 15:27:20.179527   14720 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5923022s)
	I0528 15:27:20.194549   14720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0528 15:27:20.228650   14720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0528 15:27:20.262024   14720 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0528 15:27:20.451327   14720 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0528 15:27:20.653768   14720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 15:27:20.852229   14720 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0528 15:27:20.894838   14720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0528 15:27:20.931900   14720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 15:27:21.119151   14720 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0528 15:27:21.227604   14720 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0528 15:27:21.240570   14720 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0528 15:27:21.248372   14720 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0528 15:27:21.248907   14720 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0528 15:27:21.248907   14720 command_runner.go:130] > Device: 0,22	Inode: 866         Links: 1
	I0528 15:27:21.248907   14720 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0528 15:27:21.248907   14720 command_runner.go:130] > Access: 2024-05-28 22:27:21.146794483 +0000
	I0528 15:27:21.248953   14720 command_runner.go:130] > Modify: 2024-05-28 22:27:21.146794483 +0000
	I0528 15:27:21.248953   14720 command_runner.go:130] > Change: 2024-05-28 22:27:21.149794500 +0000
	I0528 15:27:21.248953   14720 command_runner.go:130] >  Birth: -
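After cri-docker.service is restarted, the start-up code waits up to 60s for /var/run/cri-dockerd.sock to appear; the stat output above shows it already exists. A minimal sketch of such a wait loop, polling with os.Stat under an assumed 500ms interval:

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForSocket polls for path until it exists or the timeout elapses.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out waiting for %s", path)
	}

	func main() {
		fmt.Println(waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second))
	}
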
	I0528 15:27:21.248991   14720 start.go:562] Will wait 60s for crictl version
	I0528 15:27:21.259456   14720 ssh_runner.go:195] Run: which crictl
	I0528 15:27:21.264946   14720 command_runner.go:130] > /usr/bin/crictl
	I0528 15:27:21.276914   14720 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0528 15:27:21.330043   14720 command_runner.go:130] > Version:  0.1.0
	I0528 15:27:21.330043   14720 command_runner.go:130] > RuntimeName:  docker
	I0528 15:27:21.330043   14720 command_runner.go:130] > RuntimeVersion:  26.0.2
	I0528 15:27:21.330179   14720 command_runner.go:130] > RuntimeApiVersion:  v1
	I0528 15:27:21.330224   14720 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0528 15:27:21.340066   14720 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0528 15:27:21.368308   14720 command_runner.go:130] > 26.0.2
	I0528 15:27:21.377922   14720 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0528 15:27:21.404736   14720 command_runner.go:130] > 26.0.2
	I0528 15:27:21.409882   14720 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.0.2 ...
	I0528 15:27:21.409882   14720 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0528 15:27:21.414932   14720 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0528 15:27:21.414932   14720 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0528 15:27:21.414932   14720 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0528 15:27:21.414932   14720 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:91:32:cc Flags:up|broadcast|multicast|running}
	I0528 15:27:21.418820   14720 ip.go:210] interface addr: fe80::6a65:d82d:f435:d71f/64
	I0528 15:27:21.418820   14720 ip.go:210] interface addr: 172.18.208.1/20
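The ip.go lines above find the host-side address by scanning network interfaces for one whose name starts with "vEthernet (Default Switch)" and taking its IPv4 address (172.18.208.1 here), which is then used for the host.minikube.internal mapping that follows. A rough sketch of that lookup with the standard net package, using the same prefix as this log:

	package main

	import (
		"fmt"
		"net"
		"strings"
	)

	func main() {
		const prefix = "vEthernet (Default Switch)"
		ifaces, err := net.Interfaces()
		if err != nil {
			fmt.Println(err)
			return
		}
		for _, ifc := range ifaces {
			if !strings.HasPrefix(ifc.Name, prefix) {
				continue
			}
			addrs, _ := ifc.Addrs()
			for _, a := range addrs {
				if ipnet, ok := a.(*net.IPNet); ok && ipnet.IP.To4() != nil {
					fmt.Println("host-side address:", ipnet.IP)
					return
				}
			}
		}
		fmt.Println("no interface matching prefix", prefix)
	}
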
	I0528 15:27:21.433322   14720 ssh_runner.go:195] Run: grep 172.18.208.1	host.minikube.internal$ /etc/hosts
	I0528 15:27:21.439060   14720 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.18.208.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
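The /etc/hosts edit above is a filter-and-append: drop any line already ending in a tab plus "host.minikube.internal", append the fresh "172.18.208.1<TAB>host.minikube.internal" entry, and copy the result back over /etc/hosts. A hypothetical Go version of the same rewrite, operating on a local copy of the file:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		const path = "hosts" // local copy of /etc/hosts for experiments
		const entry = "172.18.208.1\thost.minikube.internal"

		data, err := os.ReadFile(path)
		if err != nil {
			fmt.Println(err)
			return
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if !strings.HasSuffix(line, "\thost.minikube.internal") {
				kept = append(kept, line)
			}
		}
		kept = append(kept, entry)
		if err := os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
			fmt.Println(err)
		}
	}
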
	I0528 15:27:21.457633   14720 kubeadm.go:877] updating cluster {Name:multinode-517300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.30.1 ClusterName:multinode-517300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.18.212.160 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.18.223.89 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.18.218.70 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingres
s-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:dock
er BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0528 15:27:21.457633   14720 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0528 15:27:21.468413   14720 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0528 15:27:21.492932   14720 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.1
	I0528 15:27:21.492932   14720 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.1
	I0528 15:27:21.492932   14720 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.1
	I0528 15:27:21.492932   14720 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.1
	I0528 15:27:21.492932   14720 command_runner.go:130] > kindest/kindnetd:v20240513-cd2ac642
	I0528 15:27:21.492932   14720 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0528 15:27:21.492932   14720 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0528 15:27:21.492932   14720 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0528 15:27:21.492932   14720 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0528 15:27:21.492932   14720 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0528 15:27:21.492932   14720 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	kindest/kindnetd:v20240513-cd2ac642
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0528 15:27:21.492932   14720 docker.go:615] Images already preloaded, skipping extraction
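Here the output of "docker images --format {{.Repository}}:{{.Tag}}" is compared against the images expected for the Kubernetes v1.30.1 preload; since everything is already present, extraction of the preload tarball is skipped. A toy sketch of that comparison, with the expected list copied from the log and the lookup simplified:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		expected := []string{
			"registry.k8s.io/kube-apiserver:v1.30.1",
			"registry.k8s.io/kube-controller-manager:v1.30.1",
			"registry.k8s.io/kube-scheduler:v1.30.1",
			"registry.k8s.io/kube-proxy:v1.30.1",
			"registry.k8s.io/etcd:3.5.12-0",
			"registry.k8s.io/coredns/coredns:v1.11.1",
			"registry.k8s.io/pause:3.9",
			"gcr.io/k8s-minikube/storage-provisioner:v5",
		}
		out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
		if err != nil {
			fmt.Println("docker images failed:", err)
			return
		}
		have := map[string]bool{}
		for _, img := range strings.Fields(string(out)) {
			have[img] = true
		}
		for _, img := range expected {
			if !have[img] {
				fmt.Println("missing, would extract preload:", img)
			}
		}
	}
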
	I0528 15:27:21.502719   14720 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0528 15:27:21.525019   14720 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.1
	I0528 15:27:21.526008   14720 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.1
	I0528 15:27:21.526008   14720 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.1
	I0528 15:27:21.526008   14720 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.1
	I0528 15:27:21.526008   14720 command_runner.go:130] > kindest/kindnetd:v20240513-cd2ac642
	I0528 15:27:21.526008   14720 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0528 15:27:21.526008   14720 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0528 15:27:21.526198   14720 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0528 15:27:21.526198   14720 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0528 15:27:21.526256   14720 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0528 15:27:21.526435   14720 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	kindest/kindnetd:v20240513-cd2ac642
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0528 15:27:21.526551   14720 cache_images.go:84] Images are preloaded, skipping loading
	I0528 15:27:21.526632   14720 kubeadm.go:928] updating node { 172.18.212.160 8443 v1.30.1 docker true true} ...
	I0528 15:27:21.526897   14720 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-517300 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.18.212.160
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:multinode-517300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0528 15:27:21.538037   14720 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0528 15:27:21.573821   14720 command_runner.go:130] > cgroupfs
	I0528 15:27:21.574380   14720 cni.go:84] Creating CNI manager for ""
	I0528 15:27:21.574380   14720 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0528 15:27:21.574380   14720 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0528 15:27:21.574380   14720 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.18.212.160 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-517300 NodeName:multinode-517300 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.18.212.160"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.18.212.160 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0528 15:27:21.575099   14720 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.18.212.160
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-517300"
	  kubeletExtraArgs:
	    node-ip: 172.18.212.160
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.18.212.160"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0528 15:27:21.587378   14720 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0528 15:27:21.603658   14720 command_runner.go:130] > kubeadm
	I0528 15:27:21.603658   14720 command_runner.go:130] > kubectl
	I0528 15:27:21.603658   14720 command_runner.go:130] > kubelet
	I0528 15:27:21.603658   14720 binaries.go:44] Found k8s binaries, skipping transfer
	I0528 15:27:21.617533   14720 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0528 15:27:21.634311   14720 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0528 15:27:21.662953   14720 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0528 15:27:21.692372   14720 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2164 bytes)
	I0528 15:27:21.739012   14720 ssh_runner.go:195] Run: grep 172.18.212.160	control-plane.minikube.internal$ /etc/hosts
	I0528 15:27:21.745738   14720 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.18.212.160	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0528 15:27:21.779248   14720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 15:27:21.957742   14720 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0528 15:27:21.985257   14720 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-517300 for IP: 172.18.212.160
	I0528 15:27:21.985339   14720 certs.go:194] generating shared ca certs ...
	I0528 15:27:21.985339   14720 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 15:27:21.985540   14720 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0528 15:27:21.986411   14720 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0528 15:27:21.986681   14720 certs.go:256] generating profile certs ...
	I0528 15:27:21.986817   14720 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-517300\client.key
	I0528 15:27:21.987488   14720 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-517300\apiserver.key.d5a8148f
	I0528 15:27:21.987681   14720 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-517300\apiserver.crt.d5a8148f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.18.212.160]
	I0528 15:27:22.246263   14720 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-517300\apiserver.crt.d5a8148f ...
	I0528 15:27:22.246263   14720 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-517300\apiserver.crt.d5a8148f: {Name:mkbcc6d34573e723fd93d9d4ee6bd3f7302a4991 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 15:27:22.247723   14720 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-517300\apiserver.key.d5a8148f ...
	I0528 15:27:22.247723   14720 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-517300\apiserver.key.d5a8148f: {Name:mk0c16934ffa8089018c3bc9057a82dd25367ae5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 15:27:22.248883   14720 certs.go:381] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-517300\apiserver.crt.d5a8148f -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-517300\apiserver.crt
	I0528 15:27:22.261568   14720 certs.go:385] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-517300\apiserver.key.d5a8148f -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-517300\apiserver.key
	I0528 15:27:22.277986   14720 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-517300\proxy-client.key
	I0528 15:27:22.277986   14720 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0528 15:27:22.277986   14720 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0528 15:27:22.278524   14720 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0528 15:27:22.278857   14720 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0528 15:27:22.279006   14720 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-517300\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0528 15:27:22.279006   14720 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-517300\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0528 15:27:22.279006   14720 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-517300\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0528 15:27:22.279536   14720 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-517300\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0528 15:27:22.280542   14720 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\1780.pem (1338 bytes)
	W0528 15:27:22.280772   14720 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\1780_empty.pem, impossibly tiny 0 bytes
	I0528 15:27:22.280772   14720 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0528 15:27:22.280772   14720 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0528 15:27:22.281451   14720 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0528 15:27:22.281723   14720 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0528 15:27:22.282330   14720 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\17802.pem (1708 bytes)
	I0528 15:27:22.282566   14720 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\1780.pem -> /usr/share/ca-certificates/1780.pem
	I0528 15:27:22.282690   14720 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\17802.pem -> /usr/share/ca-certificates/17802.pem
	I0528 15:27:22.282893   14720 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0528 15:27:22.284074   14720 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0528 15:27:22.332540   14720 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0528 15:27:22.373052   14720 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0528 15:27:22.420377   14720 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0528 15:27:22.469809   14720 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-517300\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0528 15:27:22.513606   14720 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-517300\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0528 15:27:22.558894   14720 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-517300\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0528 15:27:22.594293   14720 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-517300\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0528 15:27:22.643957   14720 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\1780.pem --> /usr/share/ca-certificates/1780.pem (1338 bytes)
	I0528 15:27:22.685075   14720 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\17802.pem --> /usr/share/ca-certificates/17802.pem (1708 bytes)
	I0528 15:27:22.726819   14720 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0528 15:27:22.781564   14720 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0528 15:27:22.827107   14720 ssh_runner.go:195] Run: openssl version
	I0528 15:27:22.834855   14720 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0528 15:27:22.841487   14720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1780.pem && ln -fs /usr/share/ca-certificates/1780.pem /etc/ssl/certs/1780.pem"
	I0528 15:27:22.876983   14720 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1780.pem
	I0528 15:27:22.879767   14720 command_runner.go:130] > -rw-r--r-- 1 root root 1338 May 28 20:41 /usr/share/ca-certificates/1780.pem
	I0528 15:27:22.883887   14720 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 28 20:41 /usr/share/ca-certificates/1780.pem
	I0528 15:27:22.895842   14720 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1780.pem
	I0528 15:27:22.903472   14720 command_runner.go:130] > 51391683
	I0528 15:27:22.916557   14720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1780.pem /etc/ssl/certs/51391683.0"
	I0528 15:27:22.948338   14720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17802.pem && ln -fs /usr/share/ca-certificates/17802.pem /etc/ssl/certs/17802.pem"
	I0528 15:27:22.980126   14720 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17802.pem
	I0528 15:27:22.993518   14720 command_runner.go:130] > -rw-r--r-- 1 root root 1708 May 28 20:41 /usr/share/ca-certificates/17802.pem
	I0528 15:27:22.993518   14720 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 28 20:41 /usr/share/ca-certificates/17802.pem
	I0528 15:27:23.005002   14720 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17802.pem
	I0528 15:27:23.015600   14720 command_runner.go:130] > 3ec20f2e
	I0528 15:27:23.026986   14720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/17802.pem /etc/ssl/certs/3ec20f2e.0"
	I0528 15:27:23.063789   14720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0528 15:27:23.094341   14720 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0528 15:27:23.101739   14720 command_runner.go:130] > -rw-r--r-- 1 root root 1111 May 28 20:25 /usr/share/ca-certificates/minikubeCA.pem
	I0528 15:27:23.101739   14720 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 28 20:25 /usr/share/ca-certificates/minikubeCA.pem
	I0528 15:27:23.113610   14720 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0528 15:27:23.121874   14720 command_runner.go:130] > b5213941
	I0528 15:27:23.134903   14720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
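The three "openssl x509 -hash -noout" calls above compute OpenSSL's subject hash for each CA file (51391683, 3ec20f2e, b5213941), and the follow-up "ln -fs" creates the matching "<hash>.0" symlinks under /etc/ssl/certs that OpenSSL-based clients use for CA lookup. A sketch of the same pairing, shelling out to openssl for the hash and using the certificate paths installed above:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// hashLink creates /etc/ssl/certs/<subject-hash>.0 pointing at certPath.
	func hashLink(certPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		_ = os.Remove(link) // mirror `ln -fs`: replace any existing link
		return os.Symlink(certPath, link)
	}

	func main() {
		for _, c := range []string{
			"/usr/share/ca-certificates/1780.pem",
			"/usr/share/ca-certificates/17802.pem",
			"/usr/share/ca-certificates/minikubeCA.pem",
		} {
			fmt.Println(c, hashLink(c))
		}
	}
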
	I0528 15:27:23.164389   14720 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0528 15:27:23.170788   14720 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0528 15:27:23.175334   14720 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0528 15:27:23.175397   14720 command_runner.go:130] > Device: 8,1	Inode: 8384338     Links: 1
	I0528 15:27:23.175397   14720 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0528 15:27:23.175397   14720 command_runner.go:130] > Access: 2024-05-28 22:04:05.481412336 +0000
	I0528 15:27:23.175397   14720 command_runner.go:130] > Modify: 2024-05-28 22:04:05.481412336 +0000
	I0528 15:27:23.175397   14720 command_runner.go:130] > Change: 2024-05-28 22:04:05.481412336 +0000
	I0528 15:27:23.175397   14720 command_runner.go:130] >  Birth: 2024-05-28 22:04:05.481412336 +0000
	I0528 15:27:23.188388   14720 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0528 15:27:23.197867   14720 command_runner.go:130] > Certificate will not expire
	I0528 15:27:23.210811   14720 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0528 15:27:23.219959   14720 command_runner.go:130] > Certificate will not expire
	I0528 15:27:23.232725   14720 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0528 15:27:23.243412   14720 command_runner.go:130] > Certificate will not expire
	I0528 15:27:23.255820   14720 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0528 15:27:23.266243   14720 command_runner.go:130] > Certificate will not expire
	I0528 15:27:23.279379   14720 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0528 15:27:23.289775   14720 command_runner.go:130] > Certificate will not expire
	I0528 15:27:23.301434   14720 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0528 15:27:23.311872   14720 command_runner.go:130] > Certificate will not expire
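Each "openssl x509 -checkend 86400" call above asks whether the certificate expires within the next 24 hours; "Certificate will not expire" means it remains valid for at least that long. The same check can be expressed with crypto/x509, as in this sketch against one of the certificate paths from the log:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			fmt.Println(err)
			return
		}
		block, _ := pem.Decode(data)
		if block == nil {
			fmt.Println("no PEM block found")
			return
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			fmt.Println(err)
			return
		}
		// Equivalent of `openssl x509 -checkend 86400`.
		if cert.NotAfter.Before(time.Now().Add(24 * time.Hour)) {
			fmt.Println("Certificate will expire")
		} else {
			fmt.Println("Certificate will not expire")
		}
	}
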
	I0528 15:27:23.312006   14720 kubeadm.go:391] StartCluster: {Name:multinode-517300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-517300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.18.212.160 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.18.223.89 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.18.218.70 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0528 15:27:23.323096   14720 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0528 15:27:23.357235   14720 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0528 15:27:23.376407   14720 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0528 15:27:23.376407   14720 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0528 15:27:23.376407   14720 command_runner.go:130] > /var/lib/minikube/etcd:
	I0528 15:27:23.376407   14720 command_runner.go:130] > member
	W0528 15:27:23.376407   14720 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0528 15:27:23.376407   14720 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0528 15:27:23.376407   14720 kubeadm.go:587] restartPrimaryControlPlane start ...
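The restart path is chosen because the kubelet flag file, the kubelet config and the etcd data directory all already exist on the node; the decision rests on the single `ls` above. A rough shell sketch of the same decision (the if/else wrapper is illustrative, not minikube's Go code):

    if sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd >/dev/null 2>&1; then
      echo "existing configuration found -> attempt cluster restart"
    else
      echo "no prior state -> run a fresh kubeadm init"
    fi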
	I0528 15:27:23.388054   14720 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0528 15:27:23.406450   14720 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0528 15:27:23.407846   14720 kubeconfig.go:47] verify endpoint returned: get endpoint: "multinode-517300" does not appear in C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0528 15:27:23.408386   14720 kubeconfig.go:62] C:\Users\jenkins.minikube1\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "multinode-517300" cluster setting kubeconfig missing "multinode-517300" context setting]
	I0528 15:27:23.409097   14720 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 15:27:23.425000   14720 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0528 15:27:23.425807   14720 kapi.go:59] client config for multinode-517300: &rest.Config{Host:"https://172.18.212.160:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-517300/client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-517300/client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1a6b5a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0528 15:27:23.427163   14720 cert_rotation.go:137] Starting client certificate rotation controller
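Because the node came back with a different IP, the host-side kubeconfig no longer contains a multinode-517300 cluster or context, so it is repaired and a client is built against https://172.18.212.160:8443 with the profile's client certificate. Roughly the same repair could be done by hand with kubectl, using the paths from the client config above (these commands are an illustration of what gets written, not what minikube executes):

    kubectl config set-cluster multinode-517300 --server=https://172.18.212.160:8443 --certificate-authority=C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt
    kubectl config set-credentials multinode-517300 --client-certificate=C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-517300\client.crt --client-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-517300\client.key
    kubectl config set-context multinode-517300 --cluster=multinode-517300 --user=multinode-517300

In practice `minikube -p multinode-517300 update-context` performs the same repair from the host.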
	I0528 15:27:23.442562   14720 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0528 15:27:23.463047   14720 command_runner.go:130] > --- /var/tmp/minikube/kubeadm.yaml
	I0528 15:27:23.463129   14720 command_runner.go:130] > +++ /var/tmp/minikube/kubeadm.yaml.new
	I0528 15:27:23.463129   14720 command_runner.go:130] > @@ -1,7 +1,7 @@
	I0528 15:27:23.463129   14720 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta3
	I0528 15:27:23.463129   14720 command_runner.go:130] >  kind: InitConfiguration
	I0528 15:27:23.463129   14720 command_runner.go:130] >  localAPIEndpoint:
	I0528 15:27:23.463129   14720 command_runner.go:130] > -  advertiseAddress: 172.18.214.123
	I0528 15:27:23.463129   14720 command_runner.go:130] > +  advertiseAddress: 172.18.212.160
	I0528 15:27:23.463129   14720 command_runner.go:130] >    bindPort: 8443
	I0528 15:27:23.463129   14720 command_runner.go:130] >  bootstrapTokens:
	I0528 15:27:23.463129   14720 command_runner.go:130] >    - groups:
	I0528 15:27:23.463129   14720 command_runner.go:130] > @@ -14,13 +14,13 @@
	I0528 15:27:23.463129   14720 command_runner.go:130] >    criSocket: unix:///var/run/cri-dockerd.sock
	I0528 15:27:23.463129   14720 command_runner.go:130] >    name: "multinode-517300"
	I0528 15:27:23.463129   14720 command_runner.go:130] >    kubeletExtraArgs:
	I0528 15:27:23.463129   14720 command_runner.go:130] > -    node-ip: 172.18.214.123
	I0528 15:27:23.463129   14720 command_runner.go:130] > +    node-ip: 172.18.212.160
	I0528 15:27:23.463129   14720 command_runner.go:130] >    taints: []
	I0528 15:27:23.463129   14720 command_runner.go:130] >  ---
	I0528 15:27:23.463129   14720 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta3
	I0528 15:27:23.463129   14720 command_runner.go:130] >  kind: ClusterConfiguration
	I0528 15:27:23.463129   14720 command_runner.go:130] >  apiServer:
	I0528 15:27:23.463129   14720 command_runner.go:130] > -  certSANs: ["127.0.0.1", "localhost", "172.18.214.123"]
	I0528 15:27:23.463129   14720 command_runner.go:130] > +  certSANs: ["127.0.0.1", "localhost", "172.18.212.160"]
	I0528 15:27:23.463129   14720 command_runner.go:130] >    extraArgs:
	I0528 15:27:23.463129   14720 command_runner.go:130] >      enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	I0528 15:27:23.463129   14720 command_runner.go:130] >  controllerManager:
	I0528 15:27:23.463129   14720 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -1,7 +1,7 @@
	 apiVersion: kubeadm.k8s.io/v1beta3
	 kind: InitConfiguration
	 localAPIEndpoint:
	-  advertiseAddress: 172.18.214.123
	+  advertiseAddress: 172.18.212.160
	   bindPort: 8443
	 bootstrapTokens:
	   - groups:
	@@ -14,13 +14,13 @@
	   criSocket: unix:///var/run/cri-dockerd.sock
	   name: "multinode-517300"
	   kubeletExtraArgs:
	-    node-ip: 172.18.214.123
	+    node-ip: 172.18.212.160
	   taints: []
	 ---
	 apiVersion: kubeadm.k8s.io/v1beta3
	 kind: ClusterConfiguration
	 apiServer:
	-  certSANs: ["127.0.0.1", "localhost", "172.18.214.123"]
	+  certSANs: ["127.0.0.1", "localhost", "172.18.212.160"]
	   extraArgs:
	     enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	 controllerManager:
	
	-- /stdout --
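The diff above is the drift check: after the VM restart the primary node has a new IP (172.18.212.160 instead of 172.18.214.123), so the advertiseAddress, node-ip and certSANs in the regenerated kubeadm.yaml.new no longer match what is on disk, and the cluster is reconfigured instead of reusing the old manifests. The check amounts to the diff plus a copy (an illustrative wrapper around the commands that appear in this log):

    if ! sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new; then
      # non-zero exit -> config drift -> adopt the regenerated file
      sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
    fi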
	I0528 15:27:23.463682   14720 kubeadm.go:1154] stopping kube-system containers ...
	I0528 15:27:23.472836   14720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0528 15:27:23.504241   14720 command_runner.go:130] > 46e6bf352b79
	I0528 15:27:23.504241   14720 command_runner.go:130] > 74720800aab4
	I0528 15:27:23.504241   14720 command_runner.go:130] > 879c412c7f45
	I0528 15:27:23.504241   14720 command_runner.go:130] > 666496aef7c5
	I0528 15:27:23.504241   14720 command_runner.go:130] > 3799d49aa0d8
	I0528 15:27:23.504241   14720 command_runner.go:130] > ec6e090d8a58
	I0528 15:27:23.504241   14720 command_runner.go:130] > d1b2ab67f8e3
	I0528 15:27:23.504241   14720 command_runner.go:130] > 3f800c86d1d4
	I0528 15:27:23.504241   14720 command_runner.go:130] > 1b70e592d21f
	I0528 15:27:23.504241   14720 command_runner.go:130] > b44c8d48ef93
	I0528 15:27:23.504241   14720 command_runner.go:130] > b21e4a41b2fe
	I0528 15:27:23.504241   14720 command_runner.go:130] > d79040b216c8
	I0528 15:27:23.504241   14720 command_runner.go:130] > 3b2820676d5c
	I0528 15:27:23.504241   14720 command_runner.go:130] > 453e3ca7e2c8
	I0528 15:27:23.504241   14720 command_runner.go:130] > 74bfa22b48f3
	I0528 15:27:23.504241   14720 command_runner.go:130] > de5c7dce3683
	I0528 15:27:23.504241   14720 docker.go:483] Stopping containers: [46e6bf352b79 74720800aab4 879c412c7f45 666496aef7c5 3799d49aa0d8 ec6e090d8a58 d1b2ab67f8e3 3f800c86d1d4 1b70e592d21f b44c8d48ef93 b21e4a41b2fe d79040b216c8 3b2820676d5c 453e3ca7e2c8 74bfa22b48f3 de5c7dce3683]
	I0528 15:27:23.514645   14720 ssh_runner.go:195] Run: docker stop 46e6bf352b79 74720800aab4 879c412c7f45 666496aef7c5 3799d49aa0d8 ec6e090d8a58 d1b2ab67f8e3 3f800c86d1d4 1b70e592d21f b44c8d48ef93 b21e4a41b2fe d79040b216c8 3b2820676d5c 453e3ca7e2c8 74bfa22b48f3 de5c7dce3683
	I0528 15:27:23.551125   14720 command_runner.go:130] > 46e6bf352b79
	I0528 15:27:23.551125   14720 command_runner.go:130] > 74720800aab4
	I0528 15:27:23.551125   14720 command_runner.go:130] > 879c412c7f45
	I0528 15:27:23.551125   14720 command_runner.go:130] > 666496aef7c5
	I0528 15:27:23.551125   14720 command_runner.go:130] > 3799d49aa0d8
	I0528 15:27:23.551125   14720 command_runner.go:130] > ec6e090d8a58
	I0528 15:27:23.551125   14720 command_runner.go:130] > d1b2ab67f8e3
	I0528 15:27:23.551125   14720 command_runner.go:130] > 3f800c86d1d4
	I0528 15:27:23.551125   14720 command_runner.go:130] > 1b70e592d21f
	I0528 15:27:23.551125   14720 command_runner.go:130] > b44c8d48ef93
	I0528 15:27:23.551125   14720 command_runner.go:130] > b21e4a41b2fe
	I0528 15:27:23.551125   14720 command_runner.go:130] > d79040b216c8
	I0528 15:27:23.551125   14720 command_runner.go:130] > 3b2820676d5c
	I0528 15:27:23.551125   14720 command_runner.go:130] > 453e3ca7e2c8
	I0528 15:27:23.551125   14720 command_runner.go:130] > 74bfa22b48f3
	I0528 15:27:23.551125   14720 command_runner.go:130] > de5c7dce3683
	I0528 15:27:23.565456   14720 ssh_runner.go:195] Run: sudo systemctl stop kubelet
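Before reconfiguring, every kube-system container is stopped and kubelet is shut down so the static pod manifests can be rewritten; the container set is selected purely by the k8s_*_(kube-system)_ name filter. A compressed manual equivalent (the xargs plumbing is illustrative):

    docker ps -a --filter 'name=k8s_.*_(kube-system)_' --format '{{.ID}}' | xargs -r docker stop
    sudo systemctl stop kubelet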
	I0528 15:27:23.612308   14720 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0528 15:27:23.623269   14720 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0528 15:27:23.623269   14720 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0528 15:27:23.623269   14720 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0528 15:27:23.623269   14720 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0528 15:27:23.623269   14720 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0528 15:27:23.623269   14720 kubeadm.go:156] found existing configuration files:
	
	I0528 15:27:23.645603   14720 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0528 15:27:23.653486   14720 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0528 15:27:23.653486   14720 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0528 15:27:23.677184   14720 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0528 15:27:23.708181   14720 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0528 15:27:23.725520   14720 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0528 15:27:23.725665   14720 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0528 15:27:23.737893   14720 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0528 15:27:23.770031   14720 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0528 15:27:23.788741   14720 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0528 15:27:23.789765   14720 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0528 15:27:23.802390   14720 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0528 15:27:23.834836   14720 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0528 15:27:23.851552   14720 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0528 15:27:23.852076   14720 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0528 15:27:23.866624   14720 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
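The grep/rm sequence above is the stale-kubeconfig sweep: any of admin.conf, kubelet.conf, controller-manager.conf or scheduler.conf that does not point at https://control-plane.minikube.internal:8443 is removed so the following kubeadm phases regenerate it. Here all four files are already absent (grep exits 2), so every rm is a no-op. As a loop, the same commands look roughly like this (illustrative form only):

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"
    done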
	I0528 15:27:23.897304   14720 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0528 15:27:23.917676   14720 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0528 15:27:24.218032   14720 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0528 15:27:24.218120   14720 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0528 15:27:24.218120   14720 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0528 15:27:24.218120   14720 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0528 15:27:24.218120   14720 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0528 15:27:24.218198   14720 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0528 15:27:24.218198   14720 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0528 15:27:24.218259   14720 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0528 15:27:24.218259   14720 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0528 15:27:24.218335   14720 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0528 15:27:24.218405   14720 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0528 15:27:24.218405   14720 command_runner.go:130] > [certs] Using the existing "sa" key
	I0528 15:27:24.218496   14720 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0528 15:27:25.446773   14720 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0528 15:27:25.446773   14720 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0528 15:27:25.446773   14720 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0528 15:27:25.446773   14720 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0528 15:27:25.450722   14720 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0528 15:27:25.450722   14720 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0528 15:27:25.450778   14720 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.2321667s)
	I0528 15:27:25.450817   14720 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0528 15:27:25.745532   14720 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0528 15:27:25.745670   14720 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0528 15:27:25.745670   14720 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0528 15:27:25.746031   14720 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0528 15:27:25.834542   14720 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0528 15:27:25.834542   14720 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0528 15:27:25.834542   14720 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0528 15:27:25.834542   14720 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0528 15:27:25.834542   14720 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0528 15:27:25.905559   14720 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
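Instead of a full `kubeadm init`, the restart path replays individual init phases against the regenerated config: certs (all reused from disk), kubeconfig, kubelet-start, control-plane, and the local etcd manifest. The same sequence, condensed into a loop with the binary and config paths from this log (the env PATH wrapper used above is omitted; the loop itself is illustrative):

    KUBEADM=/var/lib/minikube/binaries/v1.30.1/kubeadm
    CFG=/var/tmp/minikube/kubeadm.yaml
    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
      sudo $KUBEADM init phase $phase --config $CFG
    done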
	I0528 15:27:25.926493   14720 api_server.go:52] waiting for apiserver process to appear ...
	I0528 15:27:25.941311   14720 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 15:27:26.459149   14720 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 15:27:26.941768   14720 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 15:27:27.451066   14720 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 15:27:27.950978   14720 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 15:27:27.974483   14720 command_runner.go:130] > 1882
	I0528 15:27:27.974720   14720 api_server.go:72] duration metric: took 2.0483137s to wait for apiserver process to appear ...
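The repeated pgrep calls are a simple poll for the kube-apiserver process to reappear after kubelet restarts the static pods; it shows up after roughly two seconds as PID 1882. A minimal equivalent loop (the sleep cadence is illustrative):

    until sudo pgrep -xnf 'kube-apiserver.*minikube.*'; do
      sleep 0.5
    done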
	I0528 15:27:27.974762   14720 api_server.go:88] waiting for apiserver healthz status ...
	I0528 15:27:27.974865   14720 api_server.go:253] Checking apiserver healthz at https://172.18.212.160:8443/healthz ...
	I0528 15:27:31.020687   14720 api_server.go:279] https://172.18.212.160:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0528 15:27:31.024687   14720 api_server.go:103] status: https://172.18.212.160:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0528 15:27:31.024687   14720 api_server.go:253] Checking apiserver healthz at https://172.18.212.160:8443/healthz ...
	I0528 15:27:31.077998   14720 api_server.go:279] https://172.18.212.160:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0528 15:27:31.080483   14720 api_server.go:103] status: https://172.18.212.160:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0528 15:27:31.476549   14720 api_server.go:253] Checking apiserver healthz at https://172.18.212.160:8443/healthz ...
	I0528 15:27:31.484913   14720 api_server.go:279] https://172.18.212.160:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0528 15:27:31.484913   14720 api_server.go:103] status: https://172.18.212.160:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0528 15:27:31.978263   14720 api_server.go:253] Checking apiserver healthz at https://172.18.212.160:8443/healthz ...
	I0528 15:27:31.986961   14720 api_server.go:279] https://172.18.212.160:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0528 15:27:31.986961   14720 api_server.go:103] status: https://172.18.212.160:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0528 15:27:32.491361   14720 api_server.go:253] Checking apiserver healthz at https://172.18.212.160:8443/healthz ...
	I0528 15:27:32.503272   14720 api_server.go:279] https://172.18.212.160:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0528 15:27:32.503722   14720 api_server.go:103] status: https://172.18.212.160:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0528 15:27:32.984021   14720 api_server.go:253] Checking apiserver healthz at https://172.18.212.160:8443/healthz ...
	I0528 15:27:32.990646   14720 api_server.go:279] https://172.18.212.160:8443/healthz returned 200:
	ok
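The /healthz polling above shows the expected recovery sequence: first 403 while the unauthenticated probe is rejected and RBAC bootstrap policy is not yet installed, then 500 while the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks are still pending, and finally 200 roughly five seconds after the process appeared. The same progression can be watched from inside the node with an authenticated probe (kubectl binary path as used later in this log; using admin.conf here and the retry loop are assumptions for illustration):

    until sudo /var/lib/minikube/binaries/v1.30.1/kubectl \
          --kubeconfig /etc/kubernetes/admin.conf get --raw '/healthz?verbose'; do
      sleep 0.5
    done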
	I0528 15:27:32.993020   14720 round_trippers.go:463] GET https://172.18.212.160:8443/version
	I0528 15:27:32.993113   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:32.993113   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:32.993113   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:33.011505   14720 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0528 15:27:33.011505   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:33.011505   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:33.011505   14720 round_trippers.go:580]     Content-Length: 263
	I0528 15:27:33.011505   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:33 GMT
	I0528 15:27:33.011505   14720 round_trippers.go:580]     Audit-Id: a1ba0a68-6bc5-4f4b-96e3-5dc461a2c929
	I0528 15:27:33.011505   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:33.011505   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:33.011505   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:33.011505   14720 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.1",
	  "gitCommit": "6911225c3f747e1cd9d109c305436d08b668f086",
	  "gitTreeState": "clean",
	  "buildDate": "2024-05-14T10:42:02Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0528 15:27:33.012975   14720 api_server.go:141] control plane version: v1.30.1
	I0528 15:27:33.012975   14720 api_server.go:131] duration metric: took 5.0381973s to wait for apiserver health ...
	I0528 15:27:33.012975   14720 cni.go:84] Creating CNI manager for ""
	I0528 15:27:33.012975   14720 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0528 15:27:33.015254   14720 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0528 15:27:33.031290   14720 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0528 15:27:33.035618   14720 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0528 15:27:33.035618   14720 command_runner.go:130] >   Size: 2781656   	Blocks: 5440       IO Block: 4096   regular file
	I0528 15:27:33.035618   14720 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0528 15:27:33.035618   14720 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0528 15:27:33.035618   14720 command_runner.go:130] > Access: 2024-05-28 22:25:59.797580600 +0000
	I0528 15:27:33.040113   14720 command_runner.go:130] > Modify: 2024-05-22 23:10:00.000000000 +0000
	I0528 15:27:33.040113   14720 command_runner.go:130] > Change: 2024-05-28 22:25:48.111000000 +0000
	I0528 15:27:33.040208   14720 command_runner.go:130] >  Birth: -
	I0528 15:27:33.040208   14720 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.1/kubectl ...
	I0528 15:27:33.040208   14720 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0528 15:27:33.103901   14720 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0528 15:27:34.073012   14720 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0528 15:27:34.073012   14720 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0528 15:27:34.075971   14720 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0528 15:27:34.075971   14720 command_runner.go:130] > daemonset.apps/kindnet configured
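With three nodes in the profile, kindnet is selected as the CNI and its manifest is re-applied through the node-local kubectl; the RBAC objects and service account are unchanged, so only daemonset.apps/kindnet is reported as configured. A follow-up check that the DaemonSet actually rolls out on all three nodes (not part of the log; paths reused from the apply command above):

    sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system rollout status daemonset/kindnet --timeout=2m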
	I0528 15:27:34.076073   14720 system_pods.go:43] waiting for kube-system pods to appear ...
	I0528 15:27:34.076073   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/namespaces/kube-system/pods
	I0528 15:27:34.076073   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:34.076073   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:34.076073   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:34.079570   14720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 15:27:34.079570   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:34.082329   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:34 GMT
	I0528 15:27:34.082473   14720 round_trippers.go:580]     Audit-Id: 7e43b862-5a15-44d4-ac27-c8fe91d353f0
	I0528 15:27:34.082538   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:34.082538   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:34.082538   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:34.082538   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:34.084523   14720 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1803"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-p9vb9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d1d5d377-99a6-445e-9597-446b3057e80f","resourceVersion":"1799","creationTimestamp":"2024-05-28T22:04:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d11a1b0d-8687-41d7-b841-24979449141a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:04:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d11a1b0d-8687-41d7-b841-24979449141a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 87768 chars]
	I0528 15:27:34.091115   14720 system_pods.go:59] 12 kube-system pods found
	I0528 15:27:34.091115   14720 system_pods.go:61] "coredns-7db6d8ff4d-p9vb9" [d1d5d377-99a6-445e-9597-446b3057e80f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0528 15:27:34.091115   14720 system_pods.go:61] "etcd-multinode-517300" [8e99dbbd-d8ae-4164-8087-9d0d43ebec00] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0528 15:27:34.091115   14720 system_pods.go:61] "kindnet-jf49h" [07acbd95-ca5b-4a2f-aab4-430f47fcf186] Running
	I0528 15:27:34.091115   14720 system_pods.go:61] "kindnet-rmwrj" [df5913a4-73bb-4d14-8b59-a59d2b449bbe] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0528 15:27:34.091115   14720 system_pods.go:61] "kindnet-smtdx" [e50433af-d238-4e3c-af31-4b248e50eb9d] Running
	I0528 15:27:34.091115   14720 system_pods.go:61] "kube-apiserver-multinode-517300" [ea194cef-cdf6-4ed7-b555-b57cc085b75b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0528 15:27:34.091115   14720 system_pods.go:61] "kube-controller-manager-multinode-517300" [8de5dc69-a626-438c-8f8e-1542254825fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0528 15:27:34.091115   14720 system_pods.go:61] "kube-proxy-ckbkv" [13ccca53-976a-4839-be94-d3960388ebfa] Running
	I0528 15:27:34.091115   14720 system_pods.go:61] "kube-proxy-hbbq2" [3d8065a2-e1bb-4cf6-837a-f63ed0718835] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0528 15:27:34.091821   14720 system_pods.go:61] "kube-proxy-jfckx" [879a98da-431d-4c4d-878a-1c5d63948d55] Running
	I0528 15:27:34.091821   14720 system_pods.go:61] "kube-scheduler-multinode-517300" [2dc1e4d9-ef1b-4f83-8762-569efcb3f4c6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0528 15:27:34.091863   14720 system_pods.go:61] "storage-provisioner" [ab978e2d-322a-4207-9539-d2d92a25e566] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0528 15:27:34.091863   14720 system_pods.go:74] duration metric: took 15.7899ms to wait for pod list to return data ...
	I0528 15:27:34.091911   14720 node_conditions.go:102] verifying NodePressure condition ...
	I0528 15:27:34.091945   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes
	I0528 15:27:34.091945   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:34.091945   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:34.091945   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:34.099817   14720 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0528 15:27:34.099817   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:34.099817   14720 round_trippers.go:580]     Audit-Id: 1b9bbc58-d068-4e00-82da-468773958280
	I0528 15:27:34.099817   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:34.099817   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:34.099817   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:34.099817   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:34.099817   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:34 GMT
	I0528 15:27:34.100497   14720 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1803"},"items":[{"metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1783","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 15629 chars]
	I0528 15:27:34.102015   14720 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0528 15:27:34.102015   14720 node_conditions.go:123] node cpu capacity is 2
	I0528 15:27:34.102015   14720 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0528 15:27:34.102015   14720 node_conditions.go:123] node cpu capacity is 2
	I0528 15:27:34.102015   14720 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0528 15:27:34.102015   14720 node_conditions.go:123] node cpu capacity is 2
	I0528 15:27:34.102015   14720 node_conditions.go:105] duration metric: took 10.0696ms to run NodePressure ...
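The NodePressure pass reads the capacity of each of the three nodes from /api/v1/nodes; all report 2 CPUs and 17734596Ki of ephemeral storage. An equivalent one-shot view from the host (the custom-columns output is an illustration, not what minikube runs):

    kubectl --context multinode-517300 get nodes \
      -o custom-columns=NAME:.metadata.name,CPU:.status.capacity.cpu,EPHEMERAL:.status.capacity.ephemeral-storage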
	I0528 15:27:34.102015   14720 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0528 15:27:34.466299   14720 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0528 15:27:34.466299   14720 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0528 15:27:34.466299   14720 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0528 15:27:34.466299   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I0528 15:27:34.466299   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:34.466299   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:34.466299   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:34.469138   14720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 15:27:34.469138   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:34.469138   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:34 GMT
	I0528 15:27:34.471744   14720 round_trippers.go:580]     Audit-Id: 425dd875-9bad-4303-a729-d306569c59c8
	I0528 15:27:34.471744   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:34.471744   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:34.471744   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:34.471744   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:34.472806   14720 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1808"},"items":[{"metadata":{"name":"etcd-multinode-517300","namespace":"kube-system","uid":"8e99dbbd-d8ae-4164-8087-9d0d43ebec00","resourceVersion":"1794","creationTimestamp":"2024-05-28T22:27:31Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.18.212.160:2379","kubernetes.io/config.hash":"29a3c98730324ed6fda87b8b55b4c17b","kubernetes.io/config.mirror":"29a3c98730324ed6fda87b8b55b4c17b","kubernetes.io/config.seen":"2024-05-28T22:27:25.948291596Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:27:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotati
ons":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f [truncated 30563 chars]
	I0528 15:27:34.474547   14720 kubeadm.go:733] kubelet initialised
	I0528 15:27:34.474547   14720 kubeadm.go:734] duration metric: took 8.248ms waiting for restarted kubelet to initialise ...
	I0528 15:27:34.474547   14720 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0528 15:27:34.474547   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/namespaces/kube-system/pods
	I0528 15:27:34.474547   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:34.474547   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:34.474547   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:34.477838   14720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 15:27:34.477838   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:34.480585   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:34 GMT
	I0528 15:27:34.480585   14720 round_trippers.go:580]     Audit-Id: 9dfe8821-f413-440e-8806-85297a862539
	I0528 15:27:34.480643   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:34.480643   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:34.480643   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:34.480643   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:34.483547   14720 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1808"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-p9vb9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d1d5d377-99a6-445e-9597-446b3057e80f","resourceVersion":"1799","creationTimestamp":"2024-05-28T22:04:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d11a1b0d-8687-41d7-b841-24979449141a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:04:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d11a1b0d-8687-41d7-b841-24979449141a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 87175 chars]
	I0528 15:27:34.491872   14720 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-p9vb9" in "kube-system" namespace to be "Ready" ...
	I0528 15:27:34.491872   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-p9vb9
	I0528 15:27:34.491872   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:34.491872   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:34.491872   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:34.492603   14720 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:27:34.494508   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:34.494508   14720 round_trippers.go:580]     Audit-Id: 0e00f395-1f09-4893-935d-0a77e2cc4b01
	I0528 15:27:34.494508   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:34.494508   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:34.494508   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:34.494560   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:34.494560   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:34 GMT
	I0528 15:27:34.494588   14720 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-p9vb9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d1d5d377-99a6-445e-9597-446b3057e80f","resourceVersion":"1799","creationTimestamp":"2024-05-28T22:04:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d11a1b0d-8687-41d7-b841-24979449141a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:04:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d11a1b0d-8687-41d7-b841-24979449141a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0528 15:27:34.495166   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:27:34.495166   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:34.495214   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:34.495214   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:34.498889   14720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 15:27:34.498889   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:34.498889   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:34.498889   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:34 GMT
	I0528 15:27:34.498889   14720 round_trippers.go:580]     Audit-Id: d6e40a21-05c9-4f94-a4db-3ea77a13b5d2
	I0528 15:27:34.498889   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:34.498889   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:34.498889   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:34.499426   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1783","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0528 15:27:34.499996   14720 pod_ready.go:97] node "multinode-517300" hosting pod "coredns-7db6d8ff4d-p9vb9" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-517300" has status "Ready":"False"
	I0528 15:27:34.500078   14720 pod_ready.go:81] duration metric: took 8.2057ms for pod "coredns-7db6d8ff4d-p9vb9" in "kube-system" namespace to be "Ready" ...
	E0528 15:27:34.500114   14720 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-517300" hosting pod "coredns-7db6d8ff4d-p9vb9" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-517300" has status "Ready":"False"
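coredns-7db6d8ff4d-p9vb9 is Running, but its readiness wait is skipped because multinode-517300 still reports Ready=False right after the kubelet restart; the loop moves on to etcd-multinode-517300 and keeps re-checking each control-plane pod for up to 4 minutes. A manual check in the same spirit, gating pod readiness on the node condition (kubectl wait illustrates the gating, it is not minikube's implementation):

    kubectl --context multinode-517300 wait node/multinode-517300 --for=condition=Ready --timeout=4m
    kubectl --context multinode-517300 -n kube-system wait pod/coredns-7db6d8ff4d-p9vb9 --for=condition=Ready --timeout=4m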
	I0528 15:27:34.500146   14720 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-517300" in "kube-system" namespace to be "Ready" ...
	I0528 15:27:34.500284   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-517300
	I0528 15:27:34.500320   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:34.500353   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:34.500391   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:34.503216   14720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 15:27:34.503216   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:34.503216   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:34.503216   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:34 GMT
	I0528 15:27:34.503216   14720 round_trippers.go:580]     Audit-Id: f07b0c46-56b1-45e9-b2e6-e5e706aa183a
	I0528 15:27:34.503216   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:34.503216   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:34.503216   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:34.503811   14720 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-517300","namespace":"kube-system","uid":"8e99dbbd-d8ae-4164-8087-9d0d43ebec00","resourceVersion":"1794","creationTimestamp":"2024-05-28T22:27:31Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.18.212.160:2379","kubernetes.io/config.hash":"29a3c98730324ed6fda87b8b55b4c17b","kubernetes.io/config.mirror":"29a3c98730324ed6fda87b8b55b4c17b","kubernetes.io/config.seen":"2024-05-28T22:27:25.948291596Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:27:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6395 chars]
	I0528 15:27:34.503964   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:27:34.503964   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:34.503964   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:34.503964   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:34.505955   14720 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0528 15:27:34.507537   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:34.507615   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:34 GMT
	I0528 15:27:34.507638   14720 round_trippers.go:580]     Audit-Id: 4f86d285-800d-42f2-90c0-56eec9ef49d0
	I0528 15:27:34.507671   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:34.507671   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:34.507671   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:34.507671   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:34.507671   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1783","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0528 15:27:34.508247   14720 pod_ready.go:97] node "multinode-517300" hosting pod "etcd-multinode-517300" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-517300" has status "Ready":"False"
	I0528 15:27:34.508364   14720 pod_ready.go:81] duration metric: took 8.181ms for pod "etcd-multinode-517300" in "kube-system" namespace to be "Ready" ...
	E0528 15:27:34.508419   14720 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-517300" hosting pod "etcd-multinode-517300" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-517300" has status "Ready":"False"
	I0528 15:27:34.508419   14720 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-517300" in "kube-system" namespace to be "Ready" ...
	I0528 15:27:34.508588   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-517300
	I0528 15:27:34.508638   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:34.508638   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:34.508684   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:34.516923   14720 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0528 15:27:34.516923   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:34.517269   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:34.517269   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:34 GMT
	I0528 15:27:34.517269   14720 round_trippers.go:580]     Audit-Id: 8dac5e8b-a81d-43a6-af3c-1685cdc1c78a
	I0528 15:27:34.517269   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:34.517269   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:34.517269   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:34.517481   14720 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-517300","namespace":"kube-system","uid":"ea194cef-cdf6-4ed7-b555-b57cc085b75b","resourceVersion":"1793","creationTimestamp":"2024-05-28T22:27:31Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.18.212.160:8443","kubernetes.io/config.hash":"1e42a0eced136b8233df0d839e9a8652","kubernetes.io/config.mirror":"1e42a0eced136b8233df0d839e9a8652","kubernetes.io/config.seen":"2024-05-28T22:27:25.898753713Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:27:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 7949 chars]
	I0528 15:27:34.517909   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:27:34.517909   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:34.517909   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:34.517909   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:34.518514   14720 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:27:34.518514   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:34.520747   14720 round_trippers.go:580]     Audit-Id: 30bcf090-b451-455c-b6a3-0db724191a15
	I0528 15:27:34.520747   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:34.520747   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:34.520747   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:34.520829   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:34.520829   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:34 GMT
	I0528 15:27:34.520895   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1783","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0528 15:27:34.520895   14720 pod_ready.go:97] node "multinode-517300" hosting pod "kube-apiserver-multinode-517300" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-517300" has status "Ready":"False"
	I0528 15:27:34.521473   14720 pod_ready.go:81] duration metric: took 12.407ms for pod "kube-apiserver-multinode-517300" in "kube-system" namespace to be "Ready" ...
	E0528 15:27:34.521473   14720 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-517300" hosting pod "kube-apiserver-multinode-517300" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-517300" has status "Ready":"False"
	I0528 15:27:34.521473   14720 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-517300" in "kube-system" namespace to be "Ready" ...
	I0528 15:27:34.521626   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-517300
	I0528 15:27:34.521626   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:34.521626   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:34.521626   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:34.525123   14720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 15:27:34.525209   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:34.525209   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:34.525256   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:34 GMT
	I0528 15:27:34.525256   14720 round_trippers.go:580]     Audit-Id: 052548c1-7d80-4f34-ae11-5f349868ae53
	I0528 15:27:34.525256   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:34.525284   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:34.525284   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:34.525582   14720 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-517300","namespace":"kube-system","uid":"8de5dc69-a626-438c-8f8e-1542254825fb","resourceVersion":"1791","creationTimestamp":"2024-05-28T22:04:18Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"26672dc0ef1c19cfbeb08736c0b97223","kubernetes.io/config.mirror":"26672dc0ef1c19cfbeb08736c0b97223","kubernetes.io/config.seen":"2024-05-28T22:04:18.078856789Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:04:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7737 chars]
	I0528 15:27:34.526181   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:27:34.526181   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:34.526181   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:34.526181   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:34.530889   14720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 15:27:34.530978   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:34.530978   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:34.531010   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:34 GMT
	I0528 15:27:34.531010   14720 round_trippers.go:580]     Audit-Id: 7d3a93ad-7d79-4d81-9927-e05894911667
	I0528 15:27:34.531010   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:34.531010   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:34.531010   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:34.531433   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1783","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0528 15:27:34.531857   14720 pod_ready.go:97] node "multinode-517300" hosting pod "kube-controller-manager-multinode-517300" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-517300" has status "Ready":"False"
	I0528 15:27:34.531857   14720 pod_ready.go:81] duration metric: took 10.3836ms for pod "kube-controller-manager-multinode-517300" in "kube-system" namespace to be "Ready" ...
	E0528 15:27:34.531857   14720 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-517300" hosting pod "kube-controller-manager-multinode-517300" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-517300" has status "Ready":"False"
	I0528 15:27:34.531857   14720 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-ckbkv" in "kube-system" namespace to be "Ready" ...
	I0528 15:27:34.682530   14720 request.go:629] Waited for 150.3591ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.212.160:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ckbkv
	I0528 15:27:34.682717   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ckbkv
	I0528 15:27:34.682717   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:34.682717   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:34.682717   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:34.683424   14720 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:27:34.688073   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:34.688073   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:34.688073   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:34.688073   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:34.688157   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:34.688157   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:34 GMT
	I0528 15:27:34.688157   14720 round_trippers.go:580]     Audit-Id: e05260b3-db48-49db-9c10-416b56c14ef1
	I0528 15:27:34.688501   14720 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-ckbkv","generateName":"kube-proxy-","namespace":"kube-system","uid":"13ccca53-976a-4839-be94-d3960388ebfa","resourceVersion":"637","creationTimestamp":"2024-05-28T22:07:27Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"1b969d95-d2e7-4f1f-9dfe-2e267564d504","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:07:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1b969d95-d2e7-4f1f-9dfe-2e267564d504\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5836 chars]
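The "Waited for ... due to client-side throttling, not priority and fairness" lines above come from client-go's own rate limiter, not from server-side API Priority and Fairness: by default client-go holds itself to roughly 5 requests/second with a burst of 10, so a rapid sequence of GETs gets delayed on the client. The sketch below is only an illustration of how a client-go caller could tune that limiter and the User-Agent header seen in these request logs; it is not minikube's code, and the kubeconfig path is a placeholder.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a rest.Config from a kubeconfig (placeholder path).
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}

	// client-go throttles requests client-side with a token bucket.
	// The defaults (QPS ~5, Burst ~10) produce the "Waited ... due to
	// client-side throttling" messages when many GETs are issued back
	// to back; raising them relaxes that limiter.
	cfg.QPS = 50
	cfg.Burst = 100

	// The User-Agent header shown in the request logs is taken from the
	// rest.Config; leaving it empty yields the default format string.
	cfg.UserAgent = "example-client/v0.0.1"

	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("nodes:", len(nodes.Items))
}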
	I0528 15:27:34.881799   14720 request.go:629] Waited for 192.2187ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.212.160:8443/api/v1/nodes/multinode-517300-m02
	I0528 15:27:34.881971   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300-m02
	I0528 15:27:34.881971   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:34.881971   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:34.881971   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:34.882517   14720 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:27:34.882517   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:34.882517   14720 round_trippers.go:580]     Audit-Id: a857866b-1d02-4135-940e-0705630715ae
	I0528 15:27:34.882517   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:34.882517   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:34.882517   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:34.882517   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:34.882517   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:34 GMT
	I0528 15:27:34.886086   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300-m02","uid":"4ff35487-ffa9-4e96-b7b9-c6f33c06ac46","resourceVersion":"1701","creationTimestamp":"2024-05-28T22:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_28T15_07_27_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:07:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3826 chars]
	I0528 15:27:34.886499   14720 pod_ready.go:92] pod "kube-proxy-ckbkv" in "kube-system" namespace has status "Ready":"True"
	I0528 15:27:34.886499   14720 pod_ready.go:81] duration metric: took 354.6408ms for pod "kube-proxy-ckbkv" in "kube-system" namespace to be "Ready" ...
	I0528 15:27:34.886499   14720 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-hbbq2" in "kube-system" namespace to be "Ready" ...
	I0528 15:27:35.091130   14720 request.go:629] Waited for 204.4087ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.212.160:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hbbq2
	I0528 15:27:35.091300   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hbbq2
	I0528 15:27:35.091300   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:35.091401   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:35.091401   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:35.092060   14720 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:27:35.092060   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:35.092060   14720 round_trippers.go:580]     Audit-Id: 43e63484-78fd-4c48-adcc-f219ff5ea848
	I0528 15:27:35.092060   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:35.092060   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:35.092060   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:35.092060   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:35.092060   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:35 GMT
	I0528 15:27:35.096018   14720 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-hbbq2","generateName":"kube-proxy-","namespace":"kube-system","uid":"3d8065a2-e1bb-4cf6-837a-f63ed0718835","resourceVersion":"1805","creationTimestamp":"2024-05-28T22:04:31Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"1b969d95-d2e7-4f1f-9dfe-2e267564d504","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:04:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1b969d95-d2e7-4f1f-9dfe-2e267564d504\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6039 chars]
	I0528 15:27:35.277694   14720 request.go:629] Waited for 180.6744ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:27:35.277790   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:27:35.277790   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:35.277790   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:35.277865   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:35.278464   14720 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:27:35.282303   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:35.282303   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:35.282384   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:35 GMT
	I0528 15:27:35.282384   14720 round_trippers.go:580]     Audit-Id: 70336e55-252b-4706-bf32-560d4480ee8e
	I0528 15:27:35.282384   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:35.282501   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:35.282559   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:35.282559   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1783","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0528 15:27:35.283327   14720 pod_ready.go:97] node "multinode-517300" hosting pod "kube-proxy-hbbq2" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-517300" has status "Ready":"False"
	I0528 15:27:35.283327   14720 pod_ready.go:81] duration metric: took 396.8268ms for pod "kube-proxy-hbbq2" in "kube-system" namespace to be "Ready" ...
	E0528 15:27:35.283327   14720 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-517300" hosting pod "kube-proxy-hbbq2" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-517300" has status "Ready":"False"
	I0528 15:27:35.283327   14720 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-jfckx" in "kube-system" namespace to be "Ready" ...
	I0528 15:27:35.487350   14720 request.go:629] Waited for 203.8273ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.212.160:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jfckx
	I0528 15:27:35.487450   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jfckx
	I0528 15:27:35.487450   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:35.487450   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:35.487602   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:35.487879   14720 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:27:35.491109   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:35.491192   14720 round_trippers.go:580]     Audit-Id: 217fe55e-412c-4811-82dc-f08dd4114e2a
	I0528 15:27:35.491192   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:35.491192   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:35.491192   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:35.491192   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:35.491192   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:35 GMT
	I0528 15:27:35.491192   14720 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-jfckx","generateName":"kube-proxy-","namespace":"kube-system","uid":"879a98da-431d-4c4d-878a-1c5d63948d55","resourceVersion":"1760","creationTimestamp":"2024-05-28T22:12:11Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"1b969d95-d2e7-4f1f-9dfe-2e267564d504","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:12:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1b969d95-d2e7-4f1f-9dfe-2e267564d504\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6062 chars]
	I0528 15:27:35.680952   14720 request.go:629] Waited for 188.7381ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.212.160:8443/api/v1/nodes/multinode-517300-m03
	I0528 15:27:35.680952   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300-m03
	I0528 15:27:35.681147   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:35.681147   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:35.681147   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:35.681782   14720 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:27:35.686835   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:35.686835   14720 round_trippers.go:580]     Audit-Id: a015d3b0-97ab-485c-954a-ed77b00c30ab
	I0528 15:27:35.686835   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:35.686835   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:35.686835   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:35.686835   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:35.686835   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:35 GMT
	I0528 15:27:35.687150   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300-m03","uid":"d1ead7ca-233f-4d89-9e59-97f5e8e47155","resourceVersion":"1772","creationTimestamp":"2024-05-28T22:23:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_28T15_23_04_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:23:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4399 chars]
	I0528 15:27:35.687661   14720 pod_ready.go:97] node "multinode-517300-m03" hosting pod "kube-proxy-jfckx" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-517300-m03" has status "Ready":"Unknown"
	I0528 15:27:35.687719   14720 pod_ready.go:81] duration metric: took 404.3907ms for pod "kube-proxy-jfckx" in "kube-system" namespace to be "Ready" ...
	E0528 15:27:35.687777   14720 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-517300-m03" hosting pod "kube-proxy-jfckx" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-517300-m03" has status "Ready":"Unknown"
	I0528 15:27:35.687777   14720 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-517300" in "kube-system" namespace to be "Ready" ...
	I0528 15:27:35.891425   14720 request.go:629] Waited for 203.1146ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.212.160:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-517300
	I0528 15:27:35.891425   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-517300
	I0528 15:27:35.891425   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:35.891425   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:35.891425   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:35.895423   14720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 15:27:35.895423   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:35.896844   14720 round_trippers.go:580]     Audit-Id: 32cce0c2-8ed4-4c84-990e-16605422b534
	I0528 15:27:35.896844   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:35.896844   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:35.896946   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:35.896946   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:35.896985   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:35 GMT
	I0528 15:27:35.897039   14720 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-517300","namespace":"kube-system","uid":"2dc1e4d9-ef1b-4f83-8762-569efcb3f4c6","resourceVersion":"1787","creationTimestamp":"2024-05-28T22:04:18Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"84e130f816e1eba82686beb351d9b34a","kubernetes.io/config.mirror":"84e130f816e1eba82686beb351d9b34a","kubernetes.io/config.seen":"2024-05-28T22:04:18.078857889Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:04:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5449 chars]
	I0528 15:27:36.091033   14720 request.go:629] Waited for 192.9616ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:27:36.091098   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:27:36.091098   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:36.091098   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:36.091098   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:36.091846   14720 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:27:36.091846   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:36.091846   14720 round_trippers.go:580]     Audit-Id: 2e79c9ef-a6e1-4792-8ae5-59f847468f46
	I0528 15:27:36.091846   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:36.095000   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:36.095000   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:36.095000   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:36.095051   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:36 GMT
	I0528 15:27:36.095459   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1783","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0528 15:27:36.095593   14720 pod_ready.go:97] node "multinode-517300" hosting pod "kube-scheduler-multinode-517300" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-517300" has status "Ready":"False"
	I0528 15:27:36.095593   14720 pod_ready.go:81] duration metric: took 407.8151ms for pod "kube-scheduler-multinode-517300" in "kube-system" namespace to be "Ready" ...
	E0528 15:27:36.095593   14720 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-517300" hosting pod "kube-scheduler-multinode-517300" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-517300" has status "Ready":"False"
	I0528 15:27:36.095593   14720 pod_ready.go:38] duration metric: took 1.6210417s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
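The sweep that just finished fetches each system-critical pod, then the node it is scheduled on, and skips the pod when that node is not Ready (the "hosting pod ... is currently not Ready (skipping!)" entries). Below is a minimal client-go sketch of that per-pod check, assuming a kubeconfig path placeholder and using names taken from the log; it is illustrative only, not the minikube implementation.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podAndNodeReady reports whether the pod's Ready condition is True and
// whether the node it is scheduled on is Ready, mirroring the skip logic
// visible in the log above.
func podAndNodeReady(ctx context.Context, c kubernetes.Interface, ns, name string) (bool, bool, error) {
	pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, false, err
	}
	podReady := false
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
			podReady = true
		}
	}
	node, err := c.CoreV1().Nodes().Get(ctx, pod.Spec.NodeName, metav1.GetOptions{})
	if err != nil {
		return podReady, false, err
	}
	nodeReady := false
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
			nodeReady = true
		}
	}
	return podReady, nodeReady, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	c, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	podOK, nodeOK, err := podAndNodeReady(context.TODO(), c, "kube-system", "etcd-multinode-517300")
	fmt.Println("pod ready:", podOK, "node ready:", nodeOK, "err:", err)
}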
	I0528 15:27:36.095593   14720 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0528 15:27:36.132156   14720 command_runner.go:130] > -16
	I0528 15:27:36.132402   14720 ops.go:34] apiserver oom_adj: -16
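The oom_adj probe above runs cat /proc/$(pgrep kube-apiserver)/oom_adj on the VM over SSH and records the value (-16 here). The snippet below is a hedged local sketch that simply reruns the same shell command with os/exec, assuming it executes on the node itself rather than over SSH.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same check as in the log, run locally: read the legacy oom_adj
	// value for the running kube-apiserver process.
	out, err := exec.Command("/bin/bash", "-c",
		"cat /proc/$(pgrep kube-apiserver)/oom_adj").CombinedOutput()
	if err != nil {
		fmt.Println("check failed:", err)
		return
	}
	fmt.Println("apiserver oom_adj:", strings.TrimSpace(string(out)))
}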
	I0528 15:27:36.132493   14720 kubeadm.go:591] duration metric: took 12.7560452s to restartPrimaryControlPlane
	I0528 15:27:36.132517   14720 kubeadm.go:393] duration metric: took 12.8204697s to StartCluster
	I0528 15:27:36.132517   14720 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 15:27:36.132517   14720 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0528 15:27:36.134100   14720 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 15:27:36.135998   14720 start.go:234] Will wait 6m0s for node &{Name: IP:172.18.212.160 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0528 15:27:36.136047   14720 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0528 15:27:36.139721   14720 out.go:177] * Verifying Kubernetes components...
	I0528 15:27:36.144201   14720 out.go:177] * Enabled addons: 
	I0528 15:27:36.136539   14720 config.go:182] Loaded profile config "multinode-517300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0528 15:27:36.149069   14720 addons.go:510] duration metric: took 13.0711ms for enable addons: enabled=[]
	I0528 15:27:36.163352   14720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 15:27:36.410324   14720 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0528 15:27:36.442287   14720 node_ready.go:35] waiting up to 6m0s for node "multinode-517300" to be "Ready" ...
	I0528 15:27:36.442287   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:27:36.442287   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:36.442287   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:36.442287   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:36.443552   14720 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0528 15:27:36.443552   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:36.443552   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:36.443552   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:36.447224   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:36.447224   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:36 GMT
	I0528 15:27:36.447224   14720 round_trippers.go:580]     Audit-Id: 251a79b3-0490-4aa7-8828-b37bb86532d3
	I0528 15:27:36.447224   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:36.447647   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1783","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0528 15:27:36.959190   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:27:36.959270   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:36.959311   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:36.959311   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:36.967674   14720 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0528 15:27:36.967713   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:36.967713   14720 round_trippers.go:580]     Audit-Id: 715c32d9-3eb4-4b8f-84ee-2609f2c271d8
	I0528 15:27:36.967713   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:36.967713   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:36.967713   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:36.967713   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:36.967713   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:36 GMT
	I0528 15:27:36.967713   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1783","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0528 15:27:37.454647   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:27:37.454718   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:37.454929   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:37.454997   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:37.458638   14720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 15:27:37.458748   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:37.458748   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:37.458748   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:37.458748   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:37.458748   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:37.458748   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:37 GMT
	I0528 15:27:37.458748   14720 round_trippers.go:580]     Audit-Id: 197fa7cf-8751-45c3-b791-e27c84bed472
	I0528 15:27:37.458748   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1783","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0528 15:27:37.954327   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:27:37.954565   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:37.954565   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:37.954565   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:37.955054   14720 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:27:37.958504   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:37.958504   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:37 GMT
	I0528 15:27:37.958504   14720 round_trippers.go:580]     Audit-Id: ef53bf37-ab00-4d6c-a8bf-c5e1569d7a10
	I0528 15:27:37.958504   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:37.958504   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:37.958504   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:37.958504   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:37.959976   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1783","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0528 15:27:38.454800   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:27:38.454903   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:38.454903   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:38.454963   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:38.455254   14720 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:27:38.455254   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:38.455254   14720 round_trippers.go:580]     Audit-Id: a85fa80e-9572-4532-bc99-b09f17f78555
	I0528 15:27:38.455254   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:38.460129   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:38.460129   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:38.460129   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:38.460129   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:38 GMT
	I0528 15:27:38.461392   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1783","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0528 15:27:38.461915   14720 node_ready.go:53] node "multinode-517300" has status "Ready":"False"
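The repeated GET /api/v1/nodes/multinode-517300 calls in this stretch are the node-ready wait polling roughly every 500ms, within the 6m0s budget noted above, until the node's Ready condition turns True. A sketch of such a poll using client-go and apimachinery's wait helper follows; it is illustrative only (not minikube's node_ready implementation), and the kubeconfig path is a placeholder.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	c, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Poll the node every 500ms for up to 6 minutes, matching the cadence
	// and timeout visible in the log, until Ready reports True.
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := c.CoreV1().Nodes().Get(ctx, "multinode-517300", metav1.GetOptions{})
			if err != nil {
				return false, nil // keep polling on transient errors
			}
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady {
					return cond.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	fmt.Println("node ready wait finished:", err)
}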
	I0528 15:27:38.951753   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:27:38.951753   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:38.951753   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:38.951753   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:38.952517   14720 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:27:38.952517   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:38.952517   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:38.952517   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:38.952517   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:38 GMT
	I0528 15:27:38.956207   14720 round_trippers.go:580]     Audit-Id: 848b187b-fb14-49fa-ab39-aea8bc8ba119
	I0528 15:27:38.956207   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:38.956207   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:38.956456   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1783","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0528 15:27:39.467133   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:27:39.467487   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:39.467487   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:39.467487   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:39.472197   14720 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:27:39.472261   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:39.472261   14720 round_trippers.go:580]     Audit-Id: bca7dccc-9f77-4200-a77e-c436513dc483
	I0528 15:27:39.472261   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:39.472261   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:39.472261   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:39.472261   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:39.472261   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:39 GMT
	I0528 15:27:39.473011   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1783","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0528 15:27:39.953959   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:27:39.954034   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:39.954034   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:39.954123   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:39.960527   14720 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0528 15:27:39.960527   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:39.960527   14720 round_trippers.go:580]     Audit-Id: 451e32f7-02ab-4014-b169-e5e590eaa1d1
	I0528 15:27:39.960527   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:39.960527   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:39.960527   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:39.960527   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:39.960527   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:39 GMT
	I0528 15:27:39.960527   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1783","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0528 15:27:40.443390   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:27:40.443453   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:40.443453   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:40.443453   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:40.448128   14720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 15:27:40.451281   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:40.451281   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:40.451281   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:40.451281   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:40.451281   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:40.451281   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:40 GMT
	I0528 15:27:40.451281   14720 round_trippers.go:580]     Audit-Id: 7ae23976-d1e3-4003-b671-71e5f0e83bb2
	I0528 15:27:40.451424   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1783","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0528 15:27:40.949349   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:27:40.949411   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:40.949411   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:40.949411   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:40.953215   14720 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:27:40.953215   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:40.953215   14720 round_trippers.go:580]     Audit-Id: f6cd5edf-29fa-4509-8498-9fba1d6dfa5e
	I0528 15:27:40.953310   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:40.953310   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:40.953310   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:40.953310   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:40.953310   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:40 GMT
	I0528 15:27:40.953451   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1783","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0528 15:27:40.953451   14720 node_ready.go:53] node "multinode-517300" has status "Ready":"False"
	I0528 15:27:41.455482   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:27:41.455482   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:41.455641   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:41.455641   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:41.460644   14720 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0528 15:27:41.460644   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:41.460644   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:41.460644   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:41 GMT
	I0528 15:27:41.460644   14720 round_trippers.go:580]     Audit-Id: 0a4924fa-0d92-4b7d-a366-9bbea88d5abd
	I0528 15:27:41.460644   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:41.460644   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:41.460644   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:41.460644   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1783","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0528 15:27:41.955971   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:27:41.955971   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:41.955971   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:41.955971   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:41.956508   14720 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:27:41.959364   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:41.959364   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:41.959364   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:41 GMT
	I0528 15:27:41.959364   14720 round_trippers.go:580]     Audit-Id: a7994bd2-2220-4d48-b85f-52e33bbb7656
	I0528 15:27:41.959364   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:41.959364   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:41.959364   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:41.959988   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1783","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0528 15:27:42.456382   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:27:42.456473   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:42.456473   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:42.456473   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:42.456750   14720 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:27:42.460741   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:42.460741   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:42.460741   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:42 GMT
	I0528 15:27:42.460741   14720 round_trippers.go:580]     Audit-Id: 593ac8f7-465d-4c88-a5e2-7320dcf1b6ad
	I0528 15:27:42.460741   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:42.460741   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:42.460741   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:42.460741   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1783","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0528 15:27:42.954889   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:27:42.954889   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:42.954994   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:42.954994   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:42.960774   14720 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0528 15:27:42.960774   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:42.960774   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:42.960774   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:42.960774   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:42 GMT
	I0528 15:27:42.960774   14720 round_trippers.go:580]     Audit-Id: 3c60242a-14db-4e74-a7b7-6e420360f17f
	I0528 15:27:42.960774   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:42.960774   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:42.961313   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1783","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0528 15:27:42.961451   14720 node_ready.go:53] node "multinode-517300" has status "Ready":"False"
	I0528 15:27:43.445242   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:27:43.445320   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:43.445320   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:43.445412   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:43.450069   14720 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:27:43.450069   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:43.450069   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:43.450069   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:43.450069   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:43.450069   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:43.450069   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:43 GMT
	I0528 15:27:43.450069   14720 round_trippers.go:580]     Audit-Id: dedf0da1-23d5-46f2-9706-fdca02c2768c
	I0528 15:27:43.450069   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1783","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0528 15:27:43.961409   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:27:43.961409   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:43.961409   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:43.961409   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:43.961957   14720 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:27:43.965134   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:43.965134   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:43.965134   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:43 GMT
	I0528 15:27:43.965134   14720 round_trippers.go:580]     Audit-Id: 47fa7c0f-4bf6-4765-b854-b30d4518ad10
	I0528 15:27:43.965134   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:43.965134   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:43.965134   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:43.965458   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1901","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0528 15:27:44.444390   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:27:44.444390   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:44.444390   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:44.444390   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:44.445122   14720 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:27:44.445122   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:44.452322   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:44.452322   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:44.452322   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:44.452322   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:44 GMT
	I0528 15:27:44.452322   14720 round_trippers.go:580]     Audit-Id: a7785b18-9084-4d27-a934-10119e682580
	I0528 15:27:44.452439   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:44.452894   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1901","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0528 15:27:44.952409   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:27:44.952479   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:44.952479   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:44.952479   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:44.952799   14720 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:27:44.956600   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:44.956600   14720 round_trippers.go:580]     Audit-Id: 940fe5f9-cacb-4703-b89e-dd975cc01c8a
	I0528 15:27:44.956600   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:44.956600   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:44.956600   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:44.956600   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:44.956600   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:44 GMT
	I0528 15:27:44.956600   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1901","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0528 15:27:45.453212   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:27:45.453410   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:45.453410   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:45.453410   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:45.457727   14720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 15:27:45.457727   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:45.457727   14720 round_trippers.go:580]     Audit-Id: b0af8f69-d04c-4264-97e9-536e12f6cf0b
	I0528 15:27:45.457814   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:45.457814   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:45.457814   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:45.457814   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:45.457814   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:45 GMT
	I0528 15:27:45.457883   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1901","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0528 15:27:45.458621   14720 node_ready.go:53] node "multinode-517300" has status "Ready":"False"
	I0528 15:27:45.950165   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:27:45.950165   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:45.950165   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:45.950165   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:45.954638   14720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 15:27:45.954638   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:45.954638   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:45.954638   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:45.954638   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:45.954638   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:45.954638   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:45 GMT
	I0528 15:27:45.954638   14720 round_trippers.go:580]     Audit-Id: caf3eef1-6d17-42b7-bd41-6e1a38864014
	I0528 15:27:45.954638   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1901","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0528 15:27:46.451742   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:27:46.451742   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:46.451742   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:46.451742   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:46.459797   14720 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0528 15:27:46.459797   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:46.459797   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:46 GMT
	I0528 15:27:46.459797   14720 round_trippers.go:580]     Audit-Id: b3534156-99fd-4711-bb97-806a2ce8b8fa
	I0528 15:27:46.459797   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:46.459797   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:46.459797   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:46.459797   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:46.459797   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1901","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0528 15:27:46.957621   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:27:46.957819   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:46.957819   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:46.957819   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:46.960880   14720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 15:27:46.960880   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:46.960880   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:46.960880   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:46.960880   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:46.960880   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:46 GMT
	I0528 15:27:46.960880   14720 round_trippers.go:580]     Audit-Id: 68e67770-7e92-4dff-9be3-3c5353df8267
	I0528 15:27:46.960880   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:46.960880   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1901","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0528 15:27:47.457770   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:27:47.457862   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:47.457862   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:47.457862   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:47.458195   14720 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:27:47.458195   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:47.458195   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:47.458195   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:47.458195   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:47.458195   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:47 GMT
	I0528 15:27:47.458195   14720 round_trippers.go:580]     Audit-Id: aba32545-5831-46ed-92b8-de72a6a9ae67
	I0528 15:27:47.458195   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:47.462414   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1901","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0528 15:27:47.462861   14720 node_ready.go:53] node "multinode-517300" has status "Ready":"False"
	I0528 15:27:47.952961   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:27:47.953052   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:47.953052   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:47.953052   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:47.953650   14720 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:27:47.957838   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:47.957838   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:47.957838   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:47.957838   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:47.957838   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:47.957838   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:47 GMT
	I0528 15:27:47.957838   14720 round_trippers.go:580]     Audit-Id: 64543aac-6a4a-4b62-9d0c-b7adf9a3605e
	I0528 15:27:47.958024   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1901","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0528 15:27:48.451794   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:27:48.451794   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:48.451794   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:48.451794   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:48.452321   14720 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:27:48.456187   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:48.456187   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:48.456187   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:48.456187   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:48.456187   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:48.456187   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:48 GMT
	I0528 15:27:48.456187   14720 round_trippers.go:580]     Audit-Id: b1ef798d-6e1c-4781-91fd-a01259334926
	I0528 15:27:48.456187   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1901","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0528 15:27:48.944576   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:27:48.944883   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:48.944999   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:48.945105   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:48.945906   14720 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:27:48.949006   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:48.949101   14720 round_trippers.go:580]     Audit-Id: 1858afbe-a696-4a68-88fd-b8463dfb8c44
	I0528 15:27:48.949101   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:48.949171   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:48.949171   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:48.949171   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:48.949171   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:48 GMT
	I0528 15:27:48.949171   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1901","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0528 15:27:49.451885   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:27:49.451945   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:49.451981   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:49.451981   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:49.452288   14720 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:27:49.452288   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:49.452288   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:49.452288   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:49 GMT
	I0528 15:27:49.452288   14720 round_trippers.go:580]     Audit-Id: 983131a5-6beb-4aee-8986-081f55b3c820
	I0528 15:27:49.452288   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:49.452288   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:49.456267   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:49.456653   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1901","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0528 15:27:49.949600   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:27:49.949600   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:49.949685   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:49.949685   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:49.950495   14720 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:27:49.953421   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:49.953564   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:49.953564   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:49.953603   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:49 GMT
	I0528 15:27:49.953603   14720 round_trippers.go:580]     Audit-Id: 74e6d44f-1637-484c-b0d0-142ed653e212
	I0528 15:27:49.953603   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:49.953603   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:49.953787   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1901","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0528 15:27:49.954386   14720 node_ready.go:53] node "multinode-517300" has status "Ready":"False"
	I0528 15:27:50.452543   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:27:50.452543   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:50.452543   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:50.452543   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:50.453349   14720 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:27:50.456712   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:50.456712   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:50.456712   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:50.456712   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:50.456712   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:50 GMT
	I0528 15:27:50.456712   14720 round_trippers.go:580]     Audit-Id: ee19d1a0-0e8b-41cc-a868-3bb152118c1f
	I0528 15:27:50.456712   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:50.456712   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1901","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0528 15:27:50.949352   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:27:50.949352   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:50.949352   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:50.949352   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:50.953504   14720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 15:27:50.953504   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:50.953504   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:50 GMT
	I0528 15:27:50.953504   14720 round_trippers.go:580]     Audit-Id: f8f5c1e5-40aa-42ae-8698-6c1f106e1202
	I0528 15:27:50.953504   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:50.953504   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:50.953504   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:50.953504   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:50.953504   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1901","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0528 15:27:51.444558   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:27:51.444898   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:51.444955   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:51.444955   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:51.446078   14720 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0528 15:27:51.448980   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:51.448980   14720 round_trippers.go:580]     Audit-Id: 77aedf58-1dcf-479e-a1d2-eb2207d585de
	I0528 15:27:51.448980   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:51.448980   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:51.448980   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:51.448980   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:51.448980   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:51 GMT
	I0528 15:27:51.449246   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1901","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0528 15:27:51.952091   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:27:51.952159   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:51.952159   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:51.952159   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:51.956323   14720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 15:27:51.956323   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:51.956323   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:51.956323   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:51 GMT
	I0528 15:27:51.956323   14720 round_trippers.go:580]     Audit-Id: 254d5dda-6b73-47c8-833c-b91dd1b0e27f
	I0528 15:27:51.956323   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:51.956323   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:51.956323   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:51.956323   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1901","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0528 15:27:51.957186   14720 node_ready.go:53] node "multinode-517300" has status "Ready":"False"
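The node_ready.go lines above show the wait loop behind this log: the client re-fetches /api/v1/nodes/multinode-517300 roughly every 500ms and reports "Ready":"False" until the node's Ready condition flips. A minimal sketch of that polling pattern, written against the public client-go API rather than minikube's internal node_ready.go helper (the function name pollNodeReady and its parameters are hypothetical), looks like this:

    // Illustrative sketch only: approximates the polling pattern visible in the
    // node_ready.go log lines (GET the node every ~500ms, check its Ready condition).
    package readiness

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // pollNodeReady blocks until the named node reports Ready=True or the
    // timeout expires, re-fetching the Node object on a fixed interval.
    func pollNodeReady(ctx context.Context, cs kubernetes.Interface, name string, interval, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for {
    		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
    		if err == nil {
    			for _, cond := range node.Status.Conditions {
    				if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
    					return nil // node "Ready":"True", the state logged at node_ready.go:49 below
    				}
    			}
    		}
    		if time.Now().After(deadline) {
    			return fmt.Errorf("node %q not Ready within %s", name, timeout)
    		}
    		time.Sleep(interval) // ~500ms between GETs, matching the timestamps in this log
    	}
    }

Each iteration of that loop corresponds to one GET/response-header/response-body block in the log; the loop exits when the node's status finally reports "Ready":"True" (about 25.5s later in this run, per node_ready.go:38 below).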
	I0528 15:27:52.456977   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:27:52.457058   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:52.457058   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:52.457058   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:52.461510   14720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 15:27:52.461510   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:52.461510   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:52.461510   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:52.461510   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:52 GMT
	I0528 15:27:52.461510   14720 round_trippers.go:580]     Audit-Id: 399b9815-96f6-4a68-996f-a9abf2f53511
	I0528 15:27:52.461510   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:52.461510   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:52.461510   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1901","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0528 15:27:52.956024   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:27:52.956024   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:52.956024   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:52.956024   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:52.959953   14720 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:27:52.960046   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:52.960046   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:52.960146   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:52.960203   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:52.960203   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:52.960203   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:52 GMT
	I0528 15:27:52.960203   14720 round_trippers.go:580]     Audit-Id: 3c7c1f4e-b7d9-4934-97a3-2f3f41575b25
	I0528 15:27:52.960203   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1901","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0528 15:27:53.456156   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:27:53.456156   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:53.456156   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:53.456156   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:53.456688   14720 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:27:53.456688   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:53.456688   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:53 GMT
	I0528 15:27:53.456688   14720 round_trippers.go:580]     Audit-Id: 9cb106e2-c669-49f9-a5ed-562f337952e5
	I0528 15:27:53.456688   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:53.456688   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:53.460955   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:53.460986   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:53.461395   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1901","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0528 15:27:53.945172   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:27:53.945239   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:53.945239   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:53.945239   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:53.945586   14720 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:27:53.945586   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:53.945586   14720 round_trippers.go:580]     Audit-Id: 3a86c8d5-6cd1-42e6-b977-ff6b47529001
	I0528 15:27:53.945586   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:53.949291   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:53.949291   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:53.949291   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:53.949291   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:53 GMT
	I0528 15:27:53.949520   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1901","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0528 15:27:54.450825   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:27:54.450985   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:54.450985   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:54.450985   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:54.453857   14720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 15:27:54.453857   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:54.453857   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:54.453857   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:54.453857   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:54.455692   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:54.455692   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:54 GMT
	I0528 15:27:54.455692   14720 round_trippers.go:580]     Audit-Id: 32e727ab-8066-4902-ac9f-c93e9ff7b8fe
	I0528 15:27:54.455908   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1901","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0528 15:27:54.456733   14720 node_ready.go:53] node "multinode-517300" has status "Ready":"False"
	I0528 15:27:54.945340   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:27:54.945427   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:54.945427   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:54.945427   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:54.946220   14720 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:27:54.946220   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:54.949325   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:54.949325   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:54 GMT
	I0528 15:27:54.949325   14720 round_trippers.go:580]     Audit-Id: d559e8d4-5992-4df2-8c40-71c9b1e93552
	I0528 15:27:54.949325   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:54.949325   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:54.949325   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:54.949498   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1901","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0528 15:27:55.456533   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:27:55.456533   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:55.456533   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:55.456533   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:55.457286   14720 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:27:55.457286   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:55.457286   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:55.457286   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:55 GMT
	I0528 15:27:55.461209   14720 round_trippers.go:580]     Audit-Id: fce34905-dd01-4aa0-8892-91e2c7e307dd
	I0528 15:27:55.461209   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:55.461272   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:55.461272   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:55.461722   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1901","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0528 15:27:55.944796   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:27:55.944867   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:55.944867   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:55.944867   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:55.945586   14720 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:27:55.945586   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:55.945586   14720 round_trippers.go:580]     Audit-Id: f9acd967-ff75-48a3-8bbd-0f66304fe0da
	I0528 15:27:55.945586   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:55.945586   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:55.945586   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:55.948794   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:55.948794   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:55 GMT
	I0528 15:27:55.949041   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1901","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0528 15:27:56.456221   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:27:56.456221   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:56.456221   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:56.456221   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:56.459611   14720 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:27:56.459777   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:56.459845   14720 round_trippers.go:580]     Audit-Id: 17e0f196-8b0c-4c47-95ae-3ec8b14191d3
	I0528 15:27:56.459845   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:56.459845   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:56.459845   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:56.459845   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:56.459845   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:56 GMT
	I0528 15:27:56.459845   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1901","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0528 15:27:56.460919   14720 node_ready.go:53] node "multinode-517300" has status "Ready":"False"
	I0528 15:27:56.958243   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:27:56.958243   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:56.958243   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:56.958243   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:56.959385   14720 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0528 15:27:56.962080   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:56.962080   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:56 GMT
	I0528 15:27:56.962080   14720 round_trippers.go:580]     Audit-Id: 4942531e-a065-4650-8e32-ccefc8c2b682
	I0528 15:27:56.962080   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:56.962080   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:56.962080   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:56.962080   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:56.962383   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1901","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0528 15:27:57.447919   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:27:57.447919   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:57.447919   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:57.447919   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:57.452666   14720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 15:27:57.452697   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:57.452697   14720 round_trippers.go:580]     Audit-Id: 7dbd79e3-a287-46fc-b0ba-765326d0ddde
	I0528 15:27:57.452697   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:57.452697   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:57.452697   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:57.452697   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:57.452697   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:57 GMT
	I0528 15:27:57.452697   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1901","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0528 15:27:57.955696   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:27:57.955941   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:57.956079   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:57.956079   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:57.956958   14720 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:27:57.956958   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:57.956958   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:57.956958   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:57.956958   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:57.956958   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:57 GMT
	I0528 15:27:57.956958   14720 round_trippers.go:580]     Audit-Id: c89cb395-be00-4e01-9545-606f77dc14ff
	I0528 15:27:57.961386   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:57.961906   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1901","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0528 15:27:58.454144   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:27:58.454236   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:58.454236   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:58.454236   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:58.454546   14720 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:27:58.458691   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:58.458691   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:58.458691   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:58 GMT
	I0528 15:27:58.458691   14720 round_trippers.go:580]     Audit-Id: e0d78bb5-0f9b-4611-ad0a-0fdb43c1076e
	I0528 15:27:58.458691   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:58.458691   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:58.458691   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:58.459024   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1901","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0528 15:27:58.948304   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:27:58.948304   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:58.948547   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:58.948547   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:58.948824   14720 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:27:58.948824   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:58.948824   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:58.948824   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:58.952284   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:58 GMT
	I0528 15:27:58.952284   14720 round_trippers.go:580]     Audit-Id: b4679ff2-4caa-4b2e-a1b0-ac87e62994d4
	I0528 15:27:58.952284   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:58.952284   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:58.952480   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1901","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0528 15:27:58.953005   14720 node_ready.go:53] node "multinode-517300" has status "Ready":"False"
	I0528 15:27:59.456231   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:27:59.456384   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:59.456384   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:59.456494   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:59.456768   14720 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:27:59.456768   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:59.460551   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:59.460551   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:59.460551   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:59 GMT
	I0528 15:27:59.460551   14720 round_trippers.go:580]     Audit-Id: d3ed8cff-1567-4b14-9d6f-53fb84589e9f
	I0528 15:27:59.460551   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:59.460551   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:59.460886   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1901","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0528 15:27:59.946828   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:27:59.946828   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:59.946828   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:59.946828   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:59.947397   14720 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:27:59.951119   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:59.951119   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:59.951119   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:59.951119   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:59.951226   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:59 GMT
	I0528 15:27:59.951226   14720 round_trippers.go:580]     Audit-Id: c876420b-650e-4b1f-b0ea-da58d00905d7
	I0528 15:27:59.951226   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:59.951346   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1901","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0528 15:28:00.451652   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:28:00.451652   14720 round_trippers.go:469] Request Headers:
	I0528 15:28:00.451652   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:28:00.451652   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:28:00.456929   14720 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:28:00.456929   14720 round_trippers.go:577] Response Headers:
	I0528 15:28:00.456929   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:28:00.456929   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:28:00.456929   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:28:00 GMT
	I0528 15:28:00.456929   14720 round_trippers.go:580]     Audit-Id: fe470701-d583-4056-b922-4005c0b4b8a7
	I0528 15:28:00.456929   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:28:00.456929   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:28:00.456929   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1901","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0528 15:28:00.952828   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:28:00.952828   14720 round_trippers.go:469] Request Headers:
	I0528 15:28:00.952828   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:28:00.952828   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:28:00.957154   14720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 15:28:00.957277   14720 round_trippers.go:577] Response Headers:
	I0528 15:28:00.957277   14720 round_trippers.go:580]     Audit-Id: ddd02c78-f652-40d2-b88c-f2ef1685cba9
	I0528 15:28:00.957277   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:28:00.957277   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:28:00.957277   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:28:00.957277   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:28:00.957358   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:28:00 GMT
	I0528 15:28:00.957717   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1901","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0528 15:28:00.958907   14720 node_ready.go:53] node "multinode-517300" has status "Ready":"False"
	I0528 15:28:01.454700   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:28:01.454700   14720 round_trippers.go:469] Request Headers:
	I0528 15:28:01.454700   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:28:01.454700   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:28:01.455337   14720 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:28:01.458486   14720 round_trippers.go:577] Response Headers:
	I0528 15:28:01.458486   14720 round_trippers.go:580]     Audit-Id: 025b0102-bd09-4348-9e38-89c1212c4c40
	I0528 15:28:01.458486   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:28:01.458486   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:28:01.458486   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:28:01.458486   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:28:01.458486   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:28:01 GMT
	I0528 15:28:01.458789   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1901","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0528 15:28:01.957545   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:28:01.957545   14720 round_trippers.go:469] Request Headers:
	I0528 15:28:01.957545   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:28:01.957545   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:28:01.958107   14720 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:28:01.958107   14720 round_trippers.go:577] Response Headers:
	I0528 15:28:01.958107   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:28:01.961347   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:28:01.961347   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:28:01 GMT
	I0528 15:28:01.961347   14720 round_trippers.go:580]     Audit-Id: 93bd0d24-3068-4c24-a48c-291553f8eaeb
	I0528 15:28:01.961347   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:28:01.961347   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:28:01.961602   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1930","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5359 chars]
	I0528 15:28:01.962322   14720 node_ready.go:49] node "multinode-517300" has status "Ready":"True"
	I0528 15:28:01.962322   14720 node_ready.go:38] duration metric: took 25.5199538s for node "multinode-517300" to be "Ready" ...
	I0528 15:28:01.962426   14720 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0528 15:28:01.962495   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/namespaces/kube-system/pods
	I0528 15:28:01.962495   14720 round_trippers.go:469] Request Headers:
	I0528 15:28:01.962495   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:28:01.962495   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:28:01.970272   14720 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0528 15:28:01.970272   14720 round_trippers.go:577] Response Headers:
	I0528 15:28:01.970272   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:28:01.970272   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:28:01.970272   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:28:01.970272   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:28:01 GMT
	I0528 15:28:01.970272   14720 round_trippers.go:580]     Audit-Id: 87394050-2e8d-4c4a-bf35-09b2561ef264
	I0528 15:28:01.970272   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:28:01.972381   14720 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1930"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-p9vb9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d1d5d377-99a6-445e-9597-446b3057e80f","resourceVersion":"1799","creationTimestamp":"2024-05-28T22:04:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d11a1b0d-8687-41d7-b841-24979449141a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:04:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d11a1b0d-8687-41d7-b841-24979449141a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 86201 chars]
	I0528 15:28:01.976024   14720 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-p9vb9" in "kube-system" namespace to be "Ready" ...
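Once the node is Ready, pod_ready.go switches to waiting (up to 6m0s here) for the system-critical pods: it lists the kube-system pods, then checks each pod's Ready condition, starting with coredns-7db6d8ff4d-p9vb9 above. A minimal sketch of that check, again using the public client-go API rather than minikube's internal pod_ready.go code (podIsReady and waitForLabeledPods are hypothetical helper names), might look like:

    // Illustrative sketch only: list kube-system pods by a label selector
    // (e.g. "k8s-app=kube-dns") and wait until each one's Ready condition is True.
    package readiness

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // podIsReady reports whether the pod's Ready condition is True.
    func podIsReady(pod *corev1.Pod) bool {
    	for _, cond := range pod.Status.Conditions {
    		if cond.Type == corev1.PodReady {
    			return cond.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    // waitForLabeledPods polls until every kube-system pod matching the selector
    // is Ready, or the timeout (6m0s in the log above) expires.
    func waitForLabeledPods(ctx context.Context, cs kubernetes.Interface, selector string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for {
    		pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: selector})
    		if err == nil {
    			allReady := len(pods.Items) > 0
    			for i := range pods.Items {
    				if !podIsReady(&pods.Items[i]) {
    					allReady = false
    				}
    			}
    			if allReady {
    				return nil
    			}
    		}
    		if time.Now().After(deadline) {
    			return fmt.Errorf("pods matching %q not Ready within %s", selector, timeout)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    }

The per-pod GET requests that follow in the log (against /api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-p9vb9) are the same idea applied to a single pod at a time.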
	I0528 15:28:01.976024   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-p9vb9
	I0528 15:28:01.976024   14720 round_trippers.go:469] Request Headers:
	I0528 15:28:01.976024   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:28:01.976024   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:28:01.977711   14720 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0528 15:28:01.977711   14720 round_trippers.go:577] Response Headers:
	I0528 15:28:01.977711   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:28:01.977711   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:28:01.977711   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:28:01.979308   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:28:01 GMT
	I0528 15:28:01.979308   14720 round_trippers.go:580]     Audit-Id: 31f320bd-bbac-4c19-97c2-ba274c967dde
	I0528 15:28:01.979308   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:28:01.979587   14720 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-p9vb9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d1d5d377-99a6-445e-9597-446b3057e80f","resourceVersion":"1799","creationTimestamp":"2024-05-28T22:04:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d11a1b0d-8687-41d7-b841-24979449141a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:04:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d11a1b0d-8687-41d7-b841-24979449141a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0528 15:28:01.980306   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:28:01.980306   14720 round_trippers.go:469] Request Headers:
	I0528 15:28:01.980306   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:28:01.980360   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:28:01.983061   14720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 15:28:01.983061   14720 round_trippers.go:577] Response Headers:
	I0528 15:28:01.983061   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:28:01.983061   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:28:01.983364   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:28:01 GMT
	I0528 15:28:01.983364   14720 round_trippers.go:580]     Audit-Id: 7fe11c65-72a8-4c24-801c-c8e65c621d0d
	I0528 15:28:01.983364   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:28:01.983364   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:28:01.983756   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1930","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5359 chars]
	I0528 15:28:02.489129   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-p9vb9
	I0528 15:28:02.489217   14720 round_trippers.go:469] Request Headers:
	I0528 15:28:02.489217   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:28:02.489217   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:28:02.495307   14720 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0528 15:28:02.495540   14720 round_trippers.go:577] Response Headers:
	I0528 15:28:02.495540   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:28:02.495540   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:28:02.495540   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:28:02.495540   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:28:02 GMT
	I0528 15:28:02.495618   14720 round_trippers.go:580]     Audit-Id: 3e7c25a2-6d4c-4e0e-973b-76acb53d1095
	I0528 15:28:02.495618   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:28:02.495768   14720 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-p9vb9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d1d5d377-99a6-445e-9597-446b3057e80f","resourceVersion":"1799","creationTimestamp":"2024-05-28T22:04:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d11a1b0d-8687-41d7-b841-24979449141a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:04:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d11a1b0d-8687-41d7-b841-24979449141a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0528 15:28:02.496662   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:28:02.496662   14720 round_trippers.go:469] Request Headers:
	I0528 15:28:02.496662   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:28:02.496662   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:28:02.499793   14720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 15:28:02.499793   14720 round_trippers.go:577] Response Headers:
	I0528 15:28:02.499793   14720 round_trippers.go:580]     Audit-Id: b9a24684-2f3b-43d2-b4dd-0ed26b942271
	I0528 15:28:02.499793   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:28:02.499793   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:28:02.499793   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:28:02.499793   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:28:02.499793   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:28:02 GMT
	I0528 15:28:02.499793   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1930","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5359 chars]
	I0528 15:28:02.982747   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-p9vb9
	I0528 15:28:02.982849   14720 round_trippers.go:469] Request Headers:
	I0528 15:28:02.982849   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:28:02.982849   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:28:02.983158   14720 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:28:02.987363   14720 round_trippers.go:577] Response Headers:
	I0528 15:28:02.987363   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:28:02.987537   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:28:02.987537   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:28:02 GMT
	I0528 15:28:02.987537   14720 round_trippers.go:580]     Audit-Id: ee5483bf-0062-47a7-b92a-8ae7f98fd5fd
	I0528 15:28:02.987537   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:28:02.987537   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:28:02.988250   14720 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-p9vb9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d1d5d377-99a6-445e-9597-446b3057e80f","resourceVersion":"1799","creationTimestamp":"2024-05-28T22:04:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d11a1b0d-8687-41d7-b841-24979449141a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:04:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d11a1b0d-8687-41d7-b841-24979449141a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0528 15:28:02.991204   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:28:02.991204   14720 round_trippers.go:469] Request Headers:
	I0528 15:28:02.991204   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:28:02.991204   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:28:02.994582   14720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 15:28:02.994582   14720 round_trippers.go:577] Response Headers:
	I0528 15:28:02.994686   14720 round_trippers.go:580]     Audit-Id: 1d082699-9f1d-4d18-a6ac-100679ddfde1
	I0528 15:28:02.994686   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:28:02.994686   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:28:02.994686   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:28:02.994686   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:28:02.994686   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:28:02 GMT
	I0528 15:28:02.994832   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1930","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5359 chars]
	I0528 15:28:03.483838   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-p9vb9
	I0528 15:28:03.483838   14720 round_trippers.go:469] Request Headers:
	I0528 15:28:03.483838   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:28:03.483838   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:28:03.484498   14720 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:28:03.488223   14720 round_trippers.go:577] Response Headers:
	I0528 15:28:03.488223   14720 round_trippers.go:580]     Audit-Id: 2a15be28-c745-4cfd-a30a-90ee85f9370b
	I0528 15:28:03.488223   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:28:03.488223   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:28:03.488223   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:28:03.488223   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:28:03.488223   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:28:03 GMT
	I0528 15:28:03.488223   14720 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-p9vb9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d1d5d377-99a6-445e-9597-446b3057e80f","resourceVersion":"1799","creationTimestamp":"2024-05-28T22:04:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d11a1b0d-8687-41d7-b841-24979449141a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:04:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d11a1b0d-8687-41d7-b841-24979449141a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0528 15:28:03.489486   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:28:03.489486   14720 round_trippers.go:469] Request Headers:
	I0528 15:28:03.489486   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:28:03.489486   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:28:03.494818   14720 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0528 15:28:03.494818   14720 round_trippers.go:577] Response Headers:
	I0528 15:28:03.494818   14720 round_trippers.go:580]     Audit-Id: 5edd5cc1-f3a8-4e76-8f39-a75718835493
	I0528 15:28:03.494818   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:28:03.494818   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:28:03.494818   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:28:03.494818   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:28:03.494818   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:28:03 GMT
	I0528 15:28:03.495538   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1930","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5359 chars]
	I0528 15:28:03.982209   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-p9vb9
	I0528 15:28:03.982306   14720 round_trippers.go:469] Request Headers:
	I0528 15:28:03.982306   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:28:03.982306   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:28:03.984248   14720 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0528 15:28:03.984248   14720 round_trippers.go:577] Response Headers:
	I0528 15:28:03.984248   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:28:03.984248   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:28:03.984248   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:28:03.984248   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:28:03 GMT
	I0528 15:28:03.984248   14720 round_trippers.go:580]     Audit-Id: 0fa3868f-18de-4ffb-8bf3-40a96aa8f058
	I0528 15:28:03.984248   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:28:03.984248   14720 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-p9vb9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d1d5d377-99a6-445e-9597-446b3057e80f","resourceVersion":"1799","creationTimestamp":"2024-05-28T22:04:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d11a1b0d-8687-41d7-b841-24979449141a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:04:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d11a1b0d-8687-41d7-b841-24979449141a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0528 15:28:03.988246   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:28:03.988307   14720 round_trippers.go:469] Request Headers:
	I0528 15:28:03.988307   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:28:03.988307   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:28:03.991194   14720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 15:28:03.991833   14720 round_trippers.go:577] Response Headers:
	I0528 15:28:03.991833   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:28:03.991860   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:28:03.991860   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:28:03.991860   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:28:03 GMT
	I0528 15:28:03.991860   14720 round_trippers.go:580]     Audit-Id: 736907c9-4aef-421f-9741-a96dd58916b2
	I0528 15:28:03.991860   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:28:03.991860   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1935","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0528 15:28:03.992731   14720 pod_ready.go:102] pod "coredns-7db6d8ff4d-p9vb9" in "kube-system" namespace has status "Ready":"False"
	I0528 15:28:04.486986   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-p9vb9
	I0528 15:28:04.487057   14720 round_trippers.go:469] Request Headers:
	I0528 15:28:04.487057   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:28:04.487142   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:28:04.489207   14720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 15:28:04.490447   14720 round_trippers.go:577] Response Headers:
	I0528 15:28:04.490526   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:28:04.490526   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:28:04.490555   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:28:04 GMT
	I0528 15:28:04.490555   14720 round_trippers.go:580]     Audit-Id: 5068e956-fa86-4dba-8763-97012d1e7e26
	I0528 15:28:04.491622   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:28:04.491622   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:28:04.492158   14720 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-p9vb9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d1d5d377-99a6-445e-9597-446b3057e80f","resourceVersion":"1799","creationTimestamp":"2024-05-28T22:04:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d11a1b0d-8687-41d7-b841-24979449141a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:04:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d11a1b0d-8687-41d7-b841-24979449141a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0528 15:28:04.495241   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:28:04.495241   14720 round_trippers.go:469] Request Headers:
	I0528 15:28:04.495241   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:28:04.495241   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:28:04.499221   14720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 15:28:04.499221   14720 round_trippers.go:577] Response Headers:
	I0528 15:28:04.499221   14720 round_trippers.go:580]     Audit-Id: 62578849-7a3c-4767-b721-f3c3f77a2112
	I0528 15:28:04.499221   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:28:04.499221   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:28:04.499221   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:28:04.499221   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:28:04.499221   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:28:04 GMT
	I0528 15:28:04.499928   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1935","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0528 15:28:04.991384   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-p9vb9
	I0528 15:28:04.991384   14720 round_trippers.go:469] Request Headers:
	I0528 15:28:04.991384   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:28:04.991384   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:28:04.995381   14720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 15:28:04.995753   14720 round_trippers.go:577] Response Headers:
	I0528 15:28:04.995753   14720 round_trippers.go:580]     Audit-Id: c75d45ee-f3d4-48a9-9d04-b5e3c72e80be
	I0528 15:28:04.995753   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:28:04.995753   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:28:04.995753   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:28:04.995753   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:28:04.995753   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:28:04 GMT
	I0528 15:28:04.995753   14720 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-p9vb9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d1d5d377-99a6-445e-9597-446b3057e80f","resourceVersion":"1948","creationTimestamp":"2024-05-28T22:04:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d11a1b0d-8687-41d7-b841-24979449141a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:04:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d11a1b0d-8687-41d7-b841-24979449141a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7019 chars]
	I0528 15:28:04.996505   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:28:04.996505   14720 round_trippers.go:469] Request Headers:
	I0528 15:28:04.996505   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:28:04.997081   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:28:04.997840   14720 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:28:04.997840   14720 round_trippers.go:577] Response Headers:
	I0528 15:28:04.997840   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:28:04.997840   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:28:04 GMT
	I0528 15:28:04.997840   14720 round_trippers.go:580]     Audit-Id: 05f655ee-1e3a-44bb-895e-adbd613db33a
	I0528 15:28:04.997840   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:28:04.997840   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:28:04.997840   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:28:05.004152   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1935","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0528 15:28:05.488639   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-p9vb9
	I0528 15:28:05.488639   14720 round_trippers.go:469] Request Headers:
	I0528 15:28:05.488639   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:28:05.488639   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:28:05.493306   14720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 15:28:05.493306   14720 round_trippers.go:577] Response Headers:
	I0528 15:28:05.493306   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:28:05.493306   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:28:05.493306   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:28:05.493306   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:28:05.493306   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:28:05 GMT
	I0528 15:28:05.493306   14720 round_trippers.go:580]     Audit-Id: 3eb465ce-f8f9-4708-a76a-ea583b781954
	I0528 15:28:05.493306   14720 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-p9vb9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d1d5d377-99a6-445e-9597-446b3057e80f","resourceVersion":"1948","creationTimestamp":"2024-05-28T22:04:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d11a1b0d-8687-41d7-b841-24979449141a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:04:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d11a1b0d-8687-41d7-b841-24979449141a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7019 chars]
	I0528 15:28:05.494503   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:28:05.494605   14720 round_trippers.go:469] Request Headers:
	I0528 15:28:05.494605   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:28:05.494605   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:28:05.497854   14720 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:28:05.497927   14720 round_trippers.go:577] Response Headers:
	I0528 15:28:05.497927   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:28:05.497927   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:28:05.497927   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:28:05.497927   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:28:05 GMT
	I0528 15:28:05.497927   14720 round_trippers.go:580]     Audit-Id: 6cc7430b-128a-4666-9b0b-ebf414bb0c27
	I0528 15:28:05.497927   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:28:05.497927   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1935","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0528 15:28:05.976975   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-p9vb9
	I0528 15:28:05.976975   14720 round_trippers.go:469] Request Headers:
	I0528 15:28:05.977067   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:28:05.977067   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:28:05.978533   14720 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0528 15:28:05.982229   14720 round_trippers.go:577] Response Headers:
	I0528 15:28:05.982277   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:28:05 GMT
	I0528 15:28:05.982277   14720 round_trippers.go:580]     Audit-Id: 70187c41-5d68-4e8c-bcbb-b498ded7633d
	I0528 15:28:05.982277   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:28:05.982277   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:28:05.982277   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:28:05.982322   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:28:05.982489   14720 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-p9vb9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d1d5d377-99a6-445e-9597-446b3057e80f","resourceVersion":"1948","creationTimestamp":"2024-05-28T22:04:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d11a1b0d-8687-41d7-b841-24979449141a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:04:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d11a1b0d-8687-41d7-b841-24979449141a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7019 chars]
	I0528 15:28:05.983324   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:28:05.983355   14720 round_trippers.go:469] Request Headers:
	I0528 15:28:05.983397   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:28:05.983397   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:28:05.987821   14720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 15:28:05.987821   14720 round_trippers.go:577] Response Headers:
	I0528 15:28:05.987821   14720 round_trippers.go:580]     Audit-Id: 20616c10-8486-4bb2-b6da-eedecb2ab45f
	I0528 15:28:05.987821   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:28:05.987821   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:28:05.987821   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:28:05.987821   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:28:05.987821   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:28:05 GMT
	I0528 15:28:05.988352   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1935","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0528 15:28:06.481749   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-p9vb9
	I0528 15:28:06.481749   14720 round_trippers.go:469] Request Headers:
	I0528 15:28:06.481749   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:28:06.481749   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:28:06.482461   14720 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:28:06.487112   14720 round_trippers.go:577] Response Headers:
	I0528 15:28:06.487112   14720 round_trippers.go:580]     Audit-Id: 7a2ce5a6-20ec-4494-98f9-9373ad42872c
	I0528 15:28:06.487112   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:28:06.487112   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:28:06.487112   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:28:06.487220   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:28:06.487220   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:28:06 GMT
	I0528 15:28:06.487636   14720 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-p9vb9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d1d5d377-99a6-445e-9597-446b3057e80f","resourceVersion":"1954","creationTimestamp":"2024-05-28T22:04:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d11a1b0d-8687-41d7-b841-24979449141a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:04:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d11a1b0d-8687-41d7-b841-24979449141a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6790 chars]
	I0528 15:28:06.488361   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:28:06.488361   14720 round_trippers.go:469] Request Headers:
	I0528 15:28:06.488361   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:28:06.488361   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:28:06.491598   14720 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:28:06.491598   14720 round_trippers.go:577] Response Headers:
	I0528 15:28:06.491598   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:28:06.491598   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:28:06.491598   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:28:06.491598   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:28:06.491598   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:28:06 GMT
	I0528 15:28:06.491598   14720 round_trippers.go:580]     Audit-Id: 45df517a-f3d4-46c1-8f8a-483ad7c50645
	I0528 15:28:06.491598   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1935","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0528 15:28:06.492218   14720 pod_ready.go:92] pod "coredns-7db6d8ff4d-p9vb9" in "kube-system" namespace has status "Ready":"True"
	I0528 15:28:06.492218   14720 pod_ready.go:81] duration metric: took 4.5161786s for pod "coredns-7db6d8ff4d-p9vb9" in "kube-system" namespace to be "Ready" ...
	I0528 15:28:06.492218   14720 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-517300" in "kube-system" namespace to be "Ready" ...
	I0528 15:28:06.492218   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-517300
	I0528 15:28:06.492218   14720 round_trippers.go:469] Request Headers:
	I0528 15:28:06.492218   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:28:06.492218   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:28:06.493424   14720 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0528 15:28:06.493424   14720 round_trippers.go:577] Response Headers:
	I0528 15:28:06.493424   14720 round_trippers.go:580]     Audit-Id: 8fba314b-f414-489d-bd2b-71a457d11a03
	I0528 15:28:06.493424   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:28:06.496181   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:28:06.496181   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:28:06.496181   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:28:06.496181   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:28:06 GMT
	I0528 15:28:06.496259   14720 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-517300","namespace":"kube-system","uid":"8e99dbbd-d8ae-4164-8087-9d0d43ebec00","resourceVersion":"1909","creationTimestamp":"2024-05-28T22:27:31Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.18.212.160:2379","kubernetes.io/config.hash":"29a3c98730324ed6fda87b8b55b4c17b","kubernetes.io/config.mirror":"29a3c98730324ed6fda87b8b55b4c17b","kubernetes.io/config.seen":"2024-05-28T22:27:25.948291596Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:27:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6171 chars]
	I0528 15:28:06.496861   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:28:06.496861   14720 round_trippers.go:469] Request Headers:
	I0528 15:28:06.496861   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:28:06.497076   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:28:06.498816   14720 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0528 15:28:06.500372   14720 round_trippers.go:577] Response Headers:
	I0528 15:28:06.500372   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:28:06.500372   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:28:06.500372   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:28:06 GMT
	I0528 15:28:06.500372   14720 round_trippers.go:580]     Audit-Id: 41f8a032-b820-4712-bf27-88390bc28431
	I0528 15:28:06.500372   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:28:06.500372   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:28:06.500372   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1935","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0528 15:28:06.500994   14720 pod_ready.go:92] pod "etcd-multinode-517300" in "kube-system" namespace has status "Ready":"True"
	I0528 15:28:06.500994   14720 pod_ready.go:81] duration metric: took 8.776ms for pod "etcd-multinode-517300" in "kube-system" namespace to be "Ready" ...
	I0528 15:28:06.500994   14720 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-517300" in "kube-system" namespace to be "Ready" ...
	I0528 15:28:06.500994   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-517300
	I0528 15:28:06.500994   14720 round_trippers.go:469] Request Headers:
	I0528 15:28:06.500994   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:28:06.500994   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:28:06.503517   14720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 15:28:06.503517   14720 round_trippers.go:577] Response Headers:
	I0528 15:28:06.503517   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:28:06 GMT
	I0528 15:28:06.503517   14720 round_trippers.go:580]     Audit-Id: 1b2b8250-37bb-424c-a681-d95953edc28a
	I0528 15:28:06.503517   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:28:06.503517   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:28:06.504301   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:28:06.504301   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:28:06.505183   14720 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-517300","namespace":"kube-system","uid":"ea194cef-cdf6-4ed7-b555-b57cc085b75b","resourceVersion":"1875","creationTimestamp":"2024-05-28T22:27:31Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.18.212.160:8443","kubernetes.io/config.hash":"1e42a0eced136b8233df0d839e9a8652","kubernetes.io/config.mirror":"1e42a0eced136b8233df0d839e9a8652","kubernetes.io/config.seen":"2024-05-28T22:27:25.898753713Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:27:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 7705 chars]
	I0528 15:28:06.505795   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:28:06.505898   14720 round_trippers.go:469] Request Headers:
	I0528 15:28:06.505898   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:28:06.505898   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:28:06.508209   14720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 15:28:06.508209   14720 round_trippers.go:577] Response Headers:
	I0528 15:28:06.508209   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:28:06.508209   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:28:06.508209   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:28:06.508549   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:28:06 GMT
	I0528 15:28:06.508549   14720 round_trippers.go:580]     Audit-Id: 2027147e-22f9-4e66-a787-e0e9b2592817
	I0528 15:28:06.508667   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:28:06.508855   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1935","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0528 15:28:06.509386   14720 pod_ready.go:92] pod "kube-apiserver-multinode-517300" in "kube-system" namespace has status "Ready":"True"
	I0528 15:28:06.509386   14720 pod_ready.go:81] duration metric: took 8.3922ms for pod "kube-apiserver-multinode-517300" in "kube-system" namespace to be "Ready" ...
	I0528 15:28:06.509386   14720 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-517300" in "kube-system" namespace to be "Ready" ...
	I0528 15:28:06.509584   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-517300
	I0528 15:28:06.509584   14720 round_trippers.go:469] Request Headers:
	I0528 15:28:06.509584   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:28:06.509584   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:28:06.510249   14720 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:28:06.512542   14720 round_trippers.go:577] Response Headers:
	I0528 15:28:06.512542   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:28:06.512542   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:28:06 GMT
	I0528 15:28:06.512542   14720 round_trippers.go:580]     Audit-Id: 8b24dd10-3675-4e12-99bf-9b60631fa286
	I0528 15:28:06.512542   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:28:06.512542   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:28:06.512542   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:28:06.512542   14720 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-517300","namespace":"kube-system","uid":"8de5dc69-a626-438c-8f8e-1542254825fb","resourceVersion":"1884","creationTimestamp":"2024-05-28T22:04:18Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"26672dc0ef1c19cfbeb08736c0b97223","kubernetes.io/config.mirror":"26672dc0ef1c19cfbeb08736c0b97223","kubernetes.io/config.seen":"2024-05-28T22:04:18.078856789Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:04:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7475 chars]
	I0528 15:28:06.513286   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:28:06.513286   14720 round_trippers.go:469] Request Headers:
	I0528 15:28:06.513286   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:28:06.513286   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:28:06.516479   14720 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:28:06.516479   14720 round_trippers.go:577] Response Headers:
	I0528 15:28:06.516479   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:28:06.516479   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:28:06.516479   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:28:06 GMT
	I0528 15:28:06.516479   14720 round_trippers.go:580]     Audit-Id: 931624ed-d1a5-4639-83e3-54f88d9a8ade
	I0528 15:28:06.516479   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:28:06.516479   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:28:06.516479   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1935","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0528 15:28:06.517246   14720 pod_ready.go:92] pod "kube-controller-manager-multinode-517300" in "kube-system" namespace has status "Ready":"True"
	I0528 15:28:06.517246   14720 pod_ready.go:81] duration metric: took 7.8599ms for pod "kube-controller-manager-multinode-517300" in "kube-system" namespace to be "Ready" ...
	I0528 15:28:06.517246   14720 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-ckbkv" in "kube-system" namespace to be "Ready" ...
	I0528 15:28:06.517246   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ckbkv
	I0528 15:28:06.517246   14720 round_trippers.go:469] Request Headers:
	I0528 15:28:06.517246   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:28:06.517246   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:28:06.519807   14720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 15:28:06.520151   14720 round_trippers.go:577] Response Headers:
	I0528 15:28:06.520151   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:28:06.520151   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:28:06 GMT
	I0528 15:28:06.520151   14720 round_trippers.go:580]     Audit-Id: 064fb44e-0ef8-4585-9835-47707012d9e5
	I0528 15:28:06.520151   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:28:06.520151   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:28:06.520151   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:28:06.520151   14720 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-ckbkv","generateName":"kube-proxy-","namespace":"kube-system","uid":"13ccca53-976a-4839-be94-d3960388ebfa","resourceVersion":"637","creationTimestamp":"2024-05-28T22:07:27Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"1b969d95-d2e7-4f1f-9dfe-2e267564d504","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:07:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1b969d95-d2e7-4f1f-9dfe-2e267564d504\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5836 chars]
	I0528 15:28:06.520925   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300-m02
	I0528 15:28:06.520925   14720 round_trippers.go:469] Request Headers:
	I0528 15:28:06.520925   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:28:06.520925   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:28:06.523877   14720 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:28:06.523877   14720 round_trippers.go:577] Response Headers:
	I0528 15:28:06.524022   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:28:06 GMT
	I0528 15:28:06.524022   14720 round_trippers.go:580]     Audit-Id: 2c3704d8-7e60-4ad8-a57c-8308e8cdd642
	I0528 15:28:06.524022   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:28:06.524022   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:28:06.524022   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:28:06.524022   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:28:06.524022   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300-m02","uid":"4ff35487-ffa9-4e96-b7b9-c6f33c06ac46","resourceVersion":"1701","creationTimestamp":"2024-05-28T22:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_28T15_07_27_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:07:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3826 chars]
	I0528 15:28:06.525492   14720 pod_ready.go:92] pod "kube-proxy-ckbkv" in "kube-system" namespace has status "Ready":"True"
	I0528 15:28:06.525492   14720 pod_ready.go:81] duration metric: took 8.2459ms for pod "kube-proxy-ckbkv" in "kube-system" namespace to be "Ready" ...
	I0528 15:28:06.525492   14720 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hbbq2" in "kube-system" namespace to be "Ready" ...
	I0528 15:28:06.691179   14720 request.go:629] Waited for 165.4513ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.212.160:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hbbq2
	I0528 15:28:06.691268   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hbbq2
	I0528 15:28:06.691268   14720 round_trippers.go:469] Request Headers:
	I0528 15:28:06.691268   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:28:06.691268   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:28:06.691814   14720 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:28:06.694869   14720 round_trippers.go:577] Response Headers:
	I0528 15:28:06.694869   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:28:06.694869   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:28:06 GMT
	I0528 15:28:06.694869   14720 round_trippers.go:580]     Audit-Id: 08fae60a-143b-4fff-a79f-9c9e48d62604
	I0528 15:28:06.694869   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:28:06.694869   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:28:06.694869   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:28:06.695000   14720 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-hbbq2","generateName":"kube-proxy-","namespace":"kube-system","uid":"3d8065a2-e1bb-4cf6-837a-f63ed0718835","resourceVersion":"1805","creationTimestamp":"2024-05-28T22:04:31Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"1b969d95-d2e7-4f1f-9dfe-2e267564d504","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:04:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1b969d95-d2e7-4f1f-9dfe-2e267564d504\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6039 chars]
	I0528 15:28:06.884532   14720 request.go:629] Waited for 188.5758ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:28:06.884669   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:28:06.884669   14720 round_trippers.go:469] Request Headers:
	I0528 15:28:06.884669   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:28:06.884669   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:28:06.885096   14720 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:28:06.888356   14720 round_trippers.go:577] Response Headers:
	I0528 15:28:06.888356   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:28:06.888356   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:28:06.888356   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:28:06.888356   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:28:06 GMT
	I0528 15:28:06.888356   14720 round_trippers.go:580]     Audit-Id: 50ba4903-cae3-4d8c-b4db-03d93d859617
	I0528 15:28:06.888356   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:28:06.888599   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1935","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0528 15:28:06.889278   14720 pod_ready.go:92] pod "kube-proxy-hbbq2" in "kube-system" namespace has status "Ready":"True"
	I0528 15:28:06.889278   14720 pod_ready.go:81] duration metric: took 363.7854ms for pod "kube-proxy-hbbq2" in "kube-system" namespace to be "Ready" ...
	I0528 15:28:06.889278   14720 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jfckx" in "kube-system" namespace to be "Ready" ...
	I0528 15:28:07.085806   14720 request.go:629] Waited for 196.527ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.212.160:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jfckx
	I0528 15:28:07.086192   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jfckx
	I0528 15:28:07.086192   14720 round_trippers.go:469] Request Headers:
	I0528 15:28:07.086192   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:28:07.086192   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:28:07.087022   14720 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:28:07.087022   14720 round_trippers.go:577] Response Headers:
	I0528 15:28:07.091205   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:28:07 GMT
	I0528 15:28:07.091205   14720 round_trippers.go:580]     Audit-Id: 22a078f0-8a82-48c5-a0c5-9acb3ff273f3
	I0528 15:28:07.091205   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:28:07.091205   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:28:07.091205   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:28:07.091205   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:28:07.091413   14720 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-jfckx","generateName":"kube-proxy-","namespace":"kube-system","uid":"879a98da-431d-4c4d-878a-1c5d63948d55","resourceVersion":"1760","creationTimestamp":"2024-05-28T22:12:11Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"1b969d95-d2e7-4f1f-9dfe-2e267564d504","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:12:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1b969d95-d2e7-4f1f-9dfe-2e267564d504\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6062 chars]
	I0528 15:28:07.287982   14720 request.go:629] Waited for 195.6903ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.212.160:8443/api/v1/nodes/multinode-517300-m03
	I0528 15:28:07.287982   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300-m03
	I0528 15:28:07.288281   14720 round_trippers.go:469] Request Headers:
	I0528 15:28:07.288281   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:28:07.288353   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:28:07.288809   14720 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:28:07.292040   14720 round_trippers.go:577] Response Headers:
	I0528 15:28:07.292040   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:28:07.292040   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:28:07.292040   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:28:07.292335   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:28:07.292385   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:28:07 GMT
	I0528 15:28:07.292385   14720 round_trippers.go:580]     Audit-Id: 2d994222-dfe5-471b-b906-f74889484a1b
	I0528 15:28:07.292720   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300-m03","uid":"d1ead7ca-233f-4d89-9e59-97f5e8e47155","resourceVersion":"1923","creationTimestamp":"2024-05-28T22:23:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_28T15_23_04_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:23:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4399 chars]
	I0528 15:28:07.293776   14720 pod_ready.go:97] node "multinode-517300-m03" hosting pod "kube-proxy-jfckx" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-517300-m03" has status "Ready":"Unknown"
	I0528 15:28:07.293869   14720 pod_ready.go:81] duration metric: took 404.5891ms for pod "kube-proxy-jfckx" in "kube-system" namespace to be "Ready" ...
	E0528 15:28:07.293869   14720 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-517300-m03" hosting pod "kube-proxy-jfckx" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-517300-m03" has status "Ready":"Unknown"
	I0528 15:28:07.293869   14720 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-517300" in "kube-system" namespace to be "Ready" ...
	I0528 15:28:07.492874   14720 request.go:629] Waited for 198.722ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.212.160:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-517300
	I0528 15:28:07.493329   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-517300
	I0528 15:28:07.493372   14720 round_trippers.go:469] Request Headers:
	I0528 15:28:07.493372   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:28:07.493372   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:28:07.494335   14720 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:28:07.494335   14720 round_trippers.go:577] Response Headers:
	I0528 15:28:07.494335   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:28:07.494335   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:28:07.494335   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:28:07 GMT
	I0528 15:28:07.494335   14720 round_trippers.go:580]     Audit-Id: 5058c18a-e06c-408b-840f-69a7e250dbca
	I0528 15:28:07.494335   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:28:07.494335   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:28:07.498494   14720 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-517300","namespace":"kube-system","uid":"2dc1e4d9-ef1b-4f83-8762-569efcb3f4c6","resourceVersion":"1904","creationTimestamp":"2024-05-28T22:04:18Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"84e130f816e1eba82686beb351d9b34a","kubernetes.io/config.mirror":"84e130f816e1eba82686beb351d9b34a","kubernetes.io/config.seen":"2024-05-28T22:04:18.078857889Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:04:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5205 chars]
	I0528 15:28:07.682103   14720 request.go:629] Waited for 182.4692ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:28:07.682296   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:28:07.682296   14720 round_trippers.go:469] Request Headers:
	I0528 15:28:07.682296   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:28:07.682372   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:28:07.682731   14720 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:28:07.686304   14720 round_trippers.go:577] Response Headers:
	I0528 15:28:07.686304   14720 round_trippers.go:580]     Audit-Id: ee2a1c21-53ec-4c0f-a854-afa633edfa9d
	I0528 15:28:07.686422   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:28:07.686422   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:28:07.686422   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:28:07.686422   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:28:07.686422   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:28:07 GMT
	I0528 15:28:07.686551   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1935","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0528 15:28:07.687231   14720 pod_ready.go:92] pod "kube-scheduler-multinode-517300" in "kube-system" namespace has status "Ready":"True"
	I0528 15:28:07.687340   14720 pod_ready.go:81] duration metric: took 393.2139ms for pod "kube-scheduler-multinode-517300" in "kube-system" namespace to be "Ready" ...
	I0528 15:28:07.687340   14720 pod_ready.go:38] duration metric: took 5.7248958s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0528 15:28:07.687340   14720 api_server.go:52] waiting for apiserver process to appear ...
	I0528 15:28:07.699591   14720 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 15:28:07.726699   14720 command_runner.go:130] > 1882
	I0528 15:28:07.726699   14720 api_server.go:72] duration metric: took 31.5905118s to wait for apiserver process to appear ...
	I0528 15:28:07.726699   14720 api_server.go:88] waiting for apiserver healthz status ...
	I0528 15:28:07.726699   14720 api_server.go:253] Checking apiserver healthz at https://172.18.212.160:8443/healthz ...
	I0528 15:28:07.732815   14720 api_server.go:279] https://172.18.212.160:8443/healthz returned 200:
	ok
	I0528 15:28:07.735554   14720 round_trippers.go:463] GET https://172.18.212.160:8443/version
	I0528 15:28:07.735554   14720 round_trippers.go:469] Request Headers:
	I0528 15:28:07.735554   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:28:07.735554   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:28:07.736091   14720 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:28:07.737876   14720 round_trippers.go:577] Response Headers:
	I0528 15:28:07.737876   14720 round_trippers.go:580]     Audit-Id: c2d7d9cc-1d00-4255-98ae-43b6a12849f4
	I0528 15:28:07.737876   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:28:07.737876   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:28:07.737876   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:28:07.737876   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:28:07.737988   14720 round_trippers.go:580]     Content-Length: 263
	I0528 15:28:07.737988   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:28:07 GMT
	I0528 15:28:07.738029   14720 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.1",
	  "gitCommit": "6911225c3f747e1cd9d109c305436d08b668f086",
	  "gitTreeState": "clean",
	  "buildDate": "2024-05-14T10:42:02Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0528 15:28:07.738029   14720 api_server.go:141] control plane version: v1.30.1
	I0528 15:28:07.738029   14720 api_server.go:131] duration metric: took 11.3306ms to wait for apiserver health ...
	I0528 15:28:07.738029   14720 system_pods.go:43] waiting for kube-system pods to appear ...
	I0528 15:28:07.896078   14720 request.go:629] Waited for 157.8905ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.212.160:8443/api/v1/namespaces/kube-system/pods
	I0528 15:28:07.896307   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/namespaces/kube-system/pods
	I0528 15:28:07.896377   14720 round_trippers.go:469] Request Headers:
	I0528 15:28:07.896377   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:28:07.896377   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:28:07.897093   14720 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:28:07.902717   14720 round_trippers.go:577] Response Headers:
	I0528 15:28:07.902717   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:28:07.902717   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:28:07.902717   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:28:07.902717   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:28:07.902717   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:28:07 GMT
	I0528 15:28:07.902717   14720 round_trippers.go:580]     Audit-Id: 0f563ce7-986d-4688-a008-0aa977b89955
	I0528 15:28:07.904475   14720 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1958"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-p9vb9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d1d5d377-99a6-445e-9597-446b3057e80f","resourceVersion":"1954","creationTimestamp":"2024-05-28T22:04:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d11a1b0d-8687-41d7-b841-24979449141a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:04:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d11a1b0d-8687-41d7-b841-24979449141a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 86561 chars]
	I0528 15:28:07.908310   14720 system_pods.go:59] 12 kube-system pods found
	I0528 15:28:07.908310   14720 system_pods.go:61] "coredns-7db6d8ff4d-p9vb9" [d1d5d377-99a6-445e-9597-446b3057e80f] Running
	I0528 15:28:07.908310   14720 system_pods.go:61] "etcd-multinode-517300" [8e99dbbd-d8ae-4164-8087-9d0d43ebec00] Running
	I0528 15:28:07.908310   14720 system_pods.go:61] "kindnet-jf49h" [07acbd95-ca5b-4a2f-aab4-430f47fcf186] Running
	I0528 15:28:07.908310   14720 system_pods.go:61] "kindnet-rmwrj" [df5913a4-73bb-4d14-8b59-a59d2b449bbe] Running
	I0528 15:28:07.908310   14720 system_pods.go:61] "kindnet-smtdx" [e50433af-d238-4e3c-af31-4b248e50eb9d] Running
	I0528 15:28:07.908310   14720 system_pods.go:61] "kube-apiserver-multinode-517300" [ea194cef-cdf6-4ed7-b555-b57cc085b75b] Running
	I0528 15:28:07.908310   14720 system_pods.go:61] "kube-controller-manager-multinode-517300" [8de5dc69-a626-438c-8f8e-1542254825fb] Running
	I0528 15:28:07.908310   14720 system_pods.go:61] "kube-proxy-ckbkv" [13ccca53-976a-4839-be94-d3960388ebfa] Running
	I0528 15:28:07.908310   14720 system_pods.go:61] "kube-proxy-hbbq2" [3d8065a2-e1bb-4cf6-837a-f63ed0718835] Running
	I0528 15:28:07.908310   14720 system_pods.go:61] "kube-proxy-jfckx" [879a98da-431d-4c4d-878a-1c5d63948d55] Running
	I0528 15:28:07.908310   14720 system_pods.go:61] "kube-scheduler-multinode-517300" [2dc1e4d9-ef1b-4f83-8762-569efcb3f4c6] Running
	I0528 15:28:07.908310   14720 system_pods.go:61] "storage-provisioner" [ab978e2d-322a-4207-9539-d2d92a25e566] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0528 15:28:07.908310   14720 system_pods.go:74] duration metric: took 170.2799ms to wait for pod list to return data ...
	I0528 15:28:07.908310   14720 default_sa.go:34] waiting for default service account to be created ...
	I0528 15:28:08.096890   14720 request.go:629] Waited for 187.6933ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.212.160:8443/api/v1/namespaces/default/serviceaccounts
	I0528 15:28:08.096890   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/namespaces/default/serviceaccounts
	I0528 15:28:08.096890   14720 round_trippers.go:469] Request Headers:
	I0528 15:28:08.096890   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:28:08.096890   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:28:08.097678   14720 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:28:08.101167   14720 round_trippers.go:577] Response Headers:
	I0528 15:28:08.101167   14720 round_trippers.go:580]     Content-Length: 262
	I0528 15:28:08.101167   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:28:08 GMT
	I0528 15:28:08.101167   14720 round_trippers.go:580]     Audit-Id: 427cddd0-59d9-4555-a3f2-43b6d339ccb8
	I0528 15:28:08.101167   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:28:08.101262   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:28:08.101262   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:28:08.101262   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:28:08.101262   14720 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"1958"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"1a497e7b-5280-4f3b-b9dc-265476c8a394","resourceVersion":"364","creationTimestamp":"2024-05-28T22:04:31Z"}}]}
	I0528 15:28:08.101762   14720 default_sa.go:45] found service account: "default"
	I0528 15:28:08.101762   14720 default_sa.go:55] duration metric: took 193.4518ms for default service account to be created ...
	I0528 15:28:08.101916   14720 system_pods.go:116] waiting for k8s-apps to be running ...
	I0528 15:28:08.290437   14720 request.go:629] Waited for 188.1438ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.212.160:8443/api/v1/namespaces/kube-system/pods
	I0528 15:28:08.290616   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/namespaces/kube-system/pods
	I0528 15:28:08.290616   14720 round_trippers.go:469] Request Headers:
	I0528 15:28:08.290616   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:28:08.290616   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:28:08.291351   14720 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:28:08.291351   14720 round_trippers.go:577] Response Headers:
	I0528 15:28:08.296883   14720 round_trippers.go:580]     Audit-Id: f903ad3b-c5ff-4e7b-8ed3-aa74c240c124
	I0528 15:28:08.296953   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:28:08.296969   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:28:08.296969   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:28:08.297056   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:28:08.297056   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:28:08 GMT
	I0528 15:28:08.298604   14720 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1958"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-p9vb9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d1d5d377-99a6-445e-9597-446b3057e80f","resourceVersion":"1954","creationTimestamp":"2024-05-28T22:04:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d11a1b0d-8687-41d7-b841-24979449141a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:04:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d11a1b0d-8687-41d7-b841-24979449141a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 86561 chars]
	I0528 15:28:08.303231   14720 system_pods.go:86] 12 kube-system pods found
	I0528 15:28:08.303231   14720 system_pods.go:89] "coredns-7db6d8ff4d-p9vb9" [d1d5d377-99a6-445e-9597-446b3057e80f] Running
	I0528 15:28:08.303231   14720 system_pods.go:89] "etcd-multinode-517300" [8e99dbbd-d8ae-4164-8087-9d0d43ebec00] Running
	I0528 15:28:08.303231   14720 system_pods.go:89] "kindnet-jf49h" [07acbd95-ca5b-4a2f-aab4-430f47fcf186] Running
	I0528 15:28:08.303231   14720 system_pods.go:89] "kindnet-rmwrj" [df5913a4-73bb-4d14-8b59-a59d2b449bbe] Running
	I0528 15:28:08.303231   14720 system_pods.go:89] "kindnet-smtdx" [e50433af-d238-4e3c-af31-4b248e50eb9d] Running
	I0528 15:28:08.303231   14720 system_pods.go:89] "kube-apiserver-multinode-517300" [ea194cef-cdf6-4ed7-b555-b57cc085b75b] Running
	I0528 15:28:08.303231   14720 system_pods.go:89] "kube-controller-manager-multinode-517300" [8de5dc69-a626-438c-8f8e-1542254825fb] Running
	I0528 15:28:08.303231   14720 system_pods.go:89] "kube-proxy-ckbkv" [13ccca53-976a-4839-be94-d3960388ebfa] Running
	I0528 15:28:08.303231   14720 system_pods.go:89] "kube-proxy-hbbq2" [3d8065a2-e1bb-4cf6-837a-f63ed0718835] Running
	I0528 15:28:08.303231   14720 system_pods.go:89] "kube-proxy-jfckx" [879a98da-431d-4c4d-878a-1c5d63948d55] Running
	I0528 15:28:08.303231   14720 system_pods.go:89] "kube-scheduler-multinode-517300" [2dc1e4d9-ef1b-4f83-8762-569efcb3f4c6] Running
	I0528 15:28:08.303231   14720 system_pods.go:89] "storage-provisioner" [ab978e2d-322a-4207-9539-d2d92a25e566] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0528 15:28:08.303231   14720 system_pods.go:126] duration metric: took 201.3144ms to wait for k8s-apps to be running ...
	I0528 15:28:08.303231   14720 system_svc.go:44] waiting for kubelet service to be running ....
	I0528 15:28:08.315547   14720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0528 15:28:08.340531   14720 system_svc.go:56] duration metric: took 37.3002ms WaitForService to wait for kubelet
	I0528 15:28:08.340531   14720 kubeadm.go:576] duration metric: took 32.2043422s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0528 15:28:08.340531   14720 node_conditions.go:102] verifying NodePressure condition ...
	I0528 15:28:08.482142   14720 request.go:629] Waited for 141.6102ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.212.160:8443/api/v1/nodes
	I0528 15:28:08.482449   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes
	I0528 15:28:08.482449   14720 round_trippers.go:469] Request Headers:
	I0528 15:28:08.482449   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:28:08.482449   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:28:08.490687   14720 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:28:08.490687   14720 round_trippers.go:577] Response Headers:
	I0528 15:28:08.490687   14720 round_trippers.go:580]     Audit-Id: e5dad35e-a386-4963-8368-b04f7e98ea05
	I0528 15:28:08.490754   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:28:08.490754   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:28:08.490754   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:28:08.490754   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:28:08.490754   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:28:08 GMT
	I0528 15:28:08.490928   14720 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1958"},"items":[{"metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1935","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 15502 chars]
	I0528 15:28:08.491949   14720 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0528 15:28:08.491949   14720 node_conditions.go:123] node cpu capacity is 2
	I0528 15:28:08.491949   14720 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0528 15:28:08.491949   14720 node_conditions.go:123] node cpu capacity is 2
	I0528 15:28:08.491949   14720 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0528 15:28:08.491949   14720 node_conditions.go:123] node cpu capacity is 2
	I0528 15:28:08.491949   14720 node_conditions.go:105] duration metric: took 151.418ms to run NodePressure ...
	I0528 15:28:08.491949   14720 start.go:240] waiting for startup goroutines ...
	I0528 15:28:08.491949   14720 start.go:245] waiting for cluster config update ...
	I0528 15:28:08.491949   14720 start.go:254] writing updated cluster config ...
	I0528 15:28:08.497636   14720 out.go:177] 
	I0528 15:28:08.512223   14720 config.go:182] Loaded profile config "ha-941700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0528 15:28:08.512782   14720 config.go:182] Loaded profile config "multinode-517300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0528 15:28:08.512984   14720 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-517300\config.json ...
	I0528 15:28:08.520270   14720 out.go:177] * Starting "multinode-517300-m02" worker node in "multinode-517300" cluster
	I0528 15:28:08.526179   14720 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0528 15:28:08.526254   14720 cache.go:56] Caching tarball of preloaded images
	I0528 15:28:08.526752   14720 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0528 15:28:08.527019   14720 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0528 15:28:08.527245   14720 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-517300\config.json ...
	I0528 15:28:08.529569   14720 start.go:360] acquireMachinesLock for multinode-517300-m02: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0528 15:28:08.529569   14720 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-517300-m02"
	I0528 15:28:08.530197   14720 start.go:96] Skipping create...Using existing machine configuration
	I0528 15:28:08.530197   14720 fix.go:54] fixHost starting: m02
	I0528 15:28:08.530263   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300-m02 ).state
	I0528 15:28:10.601004   14720 main.go:141] libmachine: [stdout =====>] : Off
	
	I0528 15:28:10.601004   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:28:10.601004   14720 fix.go:112] recreateIfNeeded on multinode-517300-m02: state=Stopped err=<nil>
	W0528 15:28:10.601004   14720 fix.go:138] unexpected machine state, will restart: <nil>
	I0528 15:28:10.606025   14720 out.go:177] * Restarting existing hyperv VM for "multinode-517300-m02" ...
	I0528 15:28:10.608371   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-517300-m02
	I0528 15:28:13.653109   14720 main.go:141] libmachine: [stdout =====>] : 
	I0528 15:28:13.653109   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:28:13.653109   14720 main.go:141] libmachine: Waiting for host to start...
	I0528 15:28:13.653109   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300-m02 ).state
	I0528 15:28:15.896343   14720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:28:15.896570   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:28:15.896570   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300-m02 ).networkadapters[0]).ipaddresses[0]
	I0528 15:28:18.408759   14720 main.go:141] libmachine: [stdout =====>] : 
	I0528 15:28:18.417930   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:28:19.432997   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300-m02 ).state
	I0528 15:28:21.635399   14720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:28:21.635399   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:28:21.635399   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300-m02 ).networkadapters[0]).ipaddresses[0]
	I0528 15:28:24.187148   14720 main.go:141] libmachine: [stdout =====>] : 
	I0528 15:28:24.187148   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:28:25.201546   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300-m02 ).state
	I0528 15:28:27.387414   14720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:28:27.387457   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:28:27.387532   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300-m02 ).networkadapters[0]).ipaddresses[0]
	I0528 15:28:29.915881   14720 main.go:141] libmachine: [stdout =====>] : 
	I0528 15:28:29.921433   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:28:30.935225   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300-m02 ).state
	I0528 15:28:33.161804   14720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:28:33.161804   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:28:33.168045   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300-m02 ).networkadapters[0]).ipaddresses[0]
	I0528 15:28:35.685374   14720 main.go:141] libmachine: [stdout =====>] : 
	I0528 15:28:35.685374   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:28:36.691368   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300-m02 ).state
	I0528 15:28:38.912262   14720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:28:38.912262   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:28:38.912546   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300-m02 ).networkadapters[0]).ipaddresses[0]
	I0528 15:28:41.417385   14720 main.go:141] libmachine: [stdout =====>] : 172.18.212.184
	
	I0528 15:28:41.417385   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:28:41.430496   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300-m02 ).state
	I0528 15:28:43.579523   14720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:28:43.579523   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:28:43.579523   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300-m02 ).networkadapters[0]).ipaddresses[0]
	I0528 15:28:46.070748   14720 main.go:141] libmachine: [stdout =====>] : 172.18.212.184
	
	I0528 15:28:46.081958   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:28:46.082236   14720 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-517300\config.json ...
	I0528 15:28:46.084379   14720 machine.go:94] provisionDockerMachine start ...
	I0528 15:28:46.084379   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300-m02 ).state
	I0528 15:28:48.201436   14720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:28:48.212279   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:28:48.212279   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300-m02 ).networkadapters[0]).ipaddresses[0]
	I0528 15:28:50.689635   14720 main.go:141] libmachine: [stdout =====>] : 172.18.212.184
	
	I0528 15:28:50.689635   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:28:50.707276   14720 main.go:141] libmachine: Using SSH client type: native
	I0528 15:28:50.708366   14720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5ca4a0] 0x5cd080 <nil>  [] 0s} 172.18.212.184 22 <nil> <nil>}
	I0528 15:28:50.708366   14720 main.go:141] libmachine: About to run SSH command:
	hostname
	I0528 15:28:50.844070   14720 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0528 15:28:50.844070   14720 buildroot.go:166] provisioning hostname "multinode-517300-m02"
	I0528 15:28:50.844070   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300-m02 ).state
	I0528 15:28:52.951176   14720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:28:52.951176   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:28:52.962005   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300-m02 ).networkadapters[0]).ipaddresses[0]
	I0528 15:28:55.479679   14720 main.go:141] libmachine: [stdout =====>] : 172.18.212.184
	
	I0528 15:28:55.479679   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:28:55.497494   14720 main.go:141] libmachine: Using SSH client type: native
	I0528 15:28:55.498052   14720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5ca4a0] 0x5cd080 <nil>  [] 0s} 172.18.212.184 22 <nil> <nil>}
	I0528 15:28:55.498052   14720 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-517300-m02 && echo "multinode-517300-m02" | sudo tee /etc/hostname
	I0528 15:28:55.658860   14720 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-517300-m02
	
	I0528 15:28:55.658860   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300-m02 ).state
	I0528 15:28:57.728544   14720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:28:57.728544   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:28:57.740730   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300-m02 ).networkadapters[0]).ipaddresses[0]
	I0528 15:29:00.284213   14720 main.go:141] libmachine: [stdout =====>] : 172.18.212.184
	
	I0528 15:29:00.295451   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:29:00.301384   14720 main.go:141] libmachine: Using SSH client type: native
	I0528 15:29:00.301384   14720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5ca4a0] 0x5cd080 <nil>  [] 0s} 172.18.212.184 22 <nil> <nil>}
	I0528 15:29:00.301964   14720 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-517300-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-517300-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-517300-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0528 15:29:00.455898   14720 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0528 15:29:00.455898   14720 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0528 15:29:00.455898   14720 buildroot.go:174] setting up certificates
	I0528 15:29:00.455898   14720 provision.go:84] configureAuth start
	I0528 15:29:00.455898   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300-m02 ).state
	I0528 15:29:02.528540   14720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:29:02.528540   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:29:02.539421   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300-m02 ).networkadapters[0]).ipaddresses[0]
	I0528 15:29:05.104018   14720 main.go:141] libmachine: [stdout =====>] : 172.18.212.184
	
	I0528 15:29:05.115205   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:29:05.115205   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300-m02 ).state
	I0528 15:29:07.189111   14720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:29:07.201160   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:29:07.201160   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300-m02 ).networkadapters[0]).ipaddresses[0]
	I0528 15:29:09.669469   14720 main.go:141] libmachine: [stdout =====>] : 172.18.212.184
	
	I0528 15:29:09.680070   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:29:09.680070   14720 provision.go:143] copyHostCerts
	I0528 15:29:09.680190   14720 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0528 15:29:09.680190   14720 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0528 15:29:09.680190   14720 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0528 15:29:09.680869   14720 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1675 bytes)
	I0528 15:29:09.682037   14720 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0528 15:29:09.682158   14720 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0528 15:29:09.682158   14720 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0528 15:29:09.682788   14720 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0528 15:29:09.683766   14720 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0528 15:29:09.683831   14720 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0528 15:29:09.683831   14720 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0528 15:29:09.684364   14720 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0528 15:29:09.685765   14720 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-517300-m02 san=[127.0.0.1 172.18.212.184 localhost minikube multinode-517300-m02]
	I0528 15:29:09.781810   14720 provision.go:177] copyRemoteCerts
	I0528 15:29:09.795340   14720 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0528 15:29:09.795433   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300-m02 ).state
	I0528 15:29:11.888132   14720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:29:11.898848   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:29:11.899047   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300-m02 ).networkadapters[0]).ipaddresses[0]
	I0528 15:29:14.388985   14720 main.go:141] libmachine: [stdout =====>] : 172.18.212.184
	
	I0528 15:29:14.388985   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:29:14.399699   14720 sshutil.go:53] new ssh client: &{IP:172.18.212.184 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-517300-m02\id_rsa Username:docker}
	I0528 15:29:14.508090   14720 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7126169s)
	I0528 15:29:14.508090   14720 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0528 15:29:14.508090   14720 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0528 15:29:14.559204   14720 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0528 15:29:14.559371   14720 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0528 15:29:14.603083   14720 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0528 15:29:14.603530   14720 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0528 15:29:14.649140   14720 provision.go:87] duration metric: took 14.193197s to configureAuth
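With configureAuth done, ca.pem, server.pem and server-key.pem are now on the node under /etc/docker, and the docker unit written further down in this log exposes the daemon on tcp://0.0.0.0:2376 with --tlsverify pointed at exactly those files. As a hedged aside (not something the harness runs), the endpoint could be checked by hand from the Windows host using the client certs the log lists under .minikube\certs:

    # illustrative only; assumes a docker CLI on the host PATH, paths and IP taken from this log
    docker --tlsverify --tlscacert C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --tlscert C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --tlskey C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -H tcp://172.18.212.184:2376 version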
	I0528 15:29:14.649215   14720 buildroot.go:189] setting minikube options for container-runtime
	I0528 15:29:14.649447   14720 config.go:182] Loaded profile config "multinode-517300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0528 15:29:14.649447   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300-m02 ).state
	I0528 15:29:16.780294   14720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:29:16.786953   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:29:16.786953   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300-m02 ).networkadapters[0]).ipaddresses[0]
	I0528 15:29:19.285551   14720 main.go:141] libmachine: [stdout =====>] : 172.18.212.184
	
	I0528 15:29:19.285551   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:29:19.301355   14720 main.go:141] libmachine: Using SSH client type: native
	I0528 15:29:19.302118   14720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5ca4a0] 0x5cd080 <nil>  [] 0s} 172.18.212.184 22 <nil> <nil>}
	I0528 15:29:19.302118   14720 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0528 15:29:19.436255   14720 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0528 15:29:19.436255   14720 buildroot.go:70] root file system type: tmpfs
	I0528 15:29:19.436784   14720 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0528 15:29:19.436931   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300-m02 ).state
	I0528 15:29:21.543859   14720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:29:21.555238   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:29:21.555238   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300-m02 ).networkadapters[0]).ipaddresses[0]
	I0528 15:29:24.106892   14720 main.go:141] libmachine: [stdout =====>] : 172.18.212.184
	
	I0528 15:29:24.106892   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:29:24.113894   14720 main.go:141] libmachine: Using SSH client type: native
	I0528 15:29:24.114646   14720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5ca4a0] 0x5cd080 <nil>  [] 0s} 172.18.212.184 22 <nil> <nil>}
	I0528 15:29:24.114646   14720 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.18.212.160"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0528 15:29:24.280026   14720 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.18.212.160
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0528 15:29:24.280026   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300-m02 ).state
	I0528 15:29:26.352349   14720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:29:26.363353   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:29:26.363353   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300-m02 ).networkadapters[0]).ipaddresses[0]
	I0528 15:29:28.879560   14720 main.go:141] libmachine: [stdout =====>] : 172.18.212.184
	
	I0528 15:29:28.879715   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:29:28.885356   14720 main.go:141] libmachine: Using SSH client type: native
	I0528 15:29:28.885988   14720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5ca4a0] 0x5cd080 <nil>  [] 0s} 172.18.212.184 22 <nil> <nil>}
	I0528 15:29:28.885988   14720 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0528 15:29:31.207338   14720 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0528 15:29:31.207892   14720 machine.go:97] duration metric: took 45.1233681s to provisionDockerMachine
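The "diff: can't stat" output above is the expected path on a freshly provisioned node: /lib/systemd/system/docker.service does not exist yet, so diff exits non-zero and the || branch installs docker.service.new, reloads systemd, enables the unit (hence the "Created symlink" line) and restarts docker. A hedged way to confirm which unit actually took effect (the harness itself runs the first command at 15:29:55 below; the second is an extra check, plain systemd tooling):

    sudo systemctl cat docker.service          # print the unit file systemd loaded
    sudo systemctl show -p ExecStart docker    # verify a single effective ExecStart=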
	I0528 15:29:31.207892   14720 start.go:293] postStartSetup for "multinode-517300-m02" (driver="hyperv")
	I0528 15:29:31.207892   14720 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0528 15:29:31.219883   14720 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0528 15:29:31.219883   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300-m02 ).state
	I0528 15:29:33.264306   14720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:29:33.275067   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:29:33.275067   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300-m02 ).networkadapters[0]).ipaddresses[0]
	I0528 15:29:35.762354   14720 main.go:141] libmachine: [stdout =====>] : 172.18.212.184
	
	I0528 15:29:35.762354   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:29:35.773602   14720 sshutil.go:53] new ssh client: &{IP:172.18.212.184 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-517300-m02\id_rsa Username:docker}
	I0528 15:29:35.883275   14720 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.6633773s)
	I0528 15:29:35.898387   14720 ssh_runner.go:195] Run: cat /etc/os-release
	I0528 15:29:35.904482   14720 command_runner.go:130] > NAME=Buildroot
	I0528 15:29:35.904741   14720 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0528 15:29:35.904741   14720 command_runner.go:130] > ID=buildroot
	I0528 15:29:35.904741   14720 command_runner.go:130] > VERSION_ID=2023.02.9
	I0528 15:29:35.904741   14720 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0528 15:29:35.904741   14720 info.go:137] Remote host: Buildroot 2023.02.9
	I0528 15:29:35.904741   14720 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0528 15:29:35.904741   14720 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0528 15:29:35.906099   14720 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\17802.pem -> 17802.pem in /etc/ssl/certs
	I0528 15:29:35.906099   14720 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\17802.pem -> /etc/ssl/certs/17802.pem
	I0528 15:29:35.917896   14720 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0528 15:29:35.935907   14720 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\17802.pem --> /etc/ssl/certs/17802.pem (1708 bytes)
	I0528 15:29:35.992912   14720 start.go:296] duration metric: took 4.7850047s for postStartSetup
	I0528 15:29:35.992912   14720 fix.go:56] duration metric: took 1m27.4624344s for fixHost
	I0528 15:29:35.992912   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300-m02 ).state
	I0528 15:29:38.096190   14720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:29:38.096190   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:29:38.107030   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300-m02 ).networkadapters[0]).ipaddresses[0]
	I0528 15:29:40.613931   14720 main.go:141] libmachine: [stdout =====>] : 172.18.212.184
	
	I0528 15:29:40.624338   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:29:40.629385   14720 main.go:141] libmachine: Using SSH client type: native
	I0528 15:29:40.630312   14720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5ca4a0] 0x5cd080 <nil>  [] 0s} 172.18.212.184 22 <nil> <nil>}
	I0528 15:29:40.630312   14720 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0528 15:29:40.765152   14720 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716935380.765610513
	
	I0528 15:29:40.765152   14720 fix.go:216] guest clock: 1716935380.765610513
	I0528 15:29:40.765152   14720 fix.go:229] Guest: 2024-05-28 15:29:40.765610513 -0700 PDT Remote: 2024-05-28 15:29:35.9929122 -0700 PDT m=+251.282553201 (delta=4.772698313s)
	I0528 15:29:40.765152   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300-m02 ).state
	I0528 15:29:42.835864   14720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:29:42.846861   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:29:42.846988   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300-m02 ).networkadapters[0]).ipaddresses[0]
	I0528 15:29:45.315454   14720 main.go:141] libmachine: [stdout =====>] : 172.18.212.184
	
	I0528 15:29:45.315527   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:29:45.322127   14720 main.go:141] libmachine: Using SSH client type: native
	I0528 15:29:45.322886   14720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5ca4a0] 0x5cd080 <nil>  [] 0s} 172.18.212.184 22 <nil> <nil>}
	I0528 15:29:45.322886   14720 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1716935380
	I0528 15:29:45.472129   14720 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue May 28 22:29:40 UTC 2024
	
	I0528 15:29:45.472129   14720 fix.go:236] clock set: Tue May 28 22:29:40 UTC 2024
	 (err=<nil>)
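fix.go reads the guest clock over SSH (date +%s.%N), compares it with a host-side timestamp, and, given the ~4.8s delta it computes, pins the guest clock with "sudo date -s @1716935380". A hedged manual version of the same comparison, reusing the key path and IP from this log (illustrative only; the harness goes through libmachine's SSH client):

    # guest epoch seconds
    ssh -i C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-517300-m02\id_rsa docker@172.18.212.184 "date +%s.%N"
    # host epoch seconds on the Windows side, for comparison
    powershell -NoProfile -Command "Get-Date -UFormat %s"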
	I0528 15:29:45.472129   14720 start.go:83] releasing machines lock for "multinode-517300-m02", held for 1m36.9422501s
	I0528 15:29:45.472129   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300-m02 ).state
	I0528 15:29:47.553011   14720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:29:47.553011   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:29:47.564528   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300-m02 ).networkadapters[0]).ipaddresses[0]
	I0528 15:29:50.028657   14720 main.go:141] libmachine: [stdout =====>] : 172.18.212.184
	
	I0528 15:29:50.040719   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:29:50.041999   14720 out.go:177] * Found network options:
	I0528 15:29:50.046526   14720 out.go:177]   - NO_PROXY=172.18.212.160
	W0528 15:29:50.050048   14720 proxy.go:119] fail to check proxy env: Error ip not in block
	I0528 15:29:50.052412   14720 out.go:177]   - NO_PROXY=172.18.212.160
	W0528 15:29:50.054558   14720 proxy.go:119] fail to check proxy env: Error ip not in block
	W0528 15:29:50.056218   14720 proxy.go:119] fail to check proxy env: Error ip not in block
	I0528 15:29:50.060034   14720 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0528 15:29:50.060101   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300-m02 ).state
	I0528 15:29:50.069772   14720 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0528 15:29:50.069772   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300-m02 ).state
	I0528 15:29:52.204212   14720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:29:52.218842   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:29:52.218931   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300-m02 ).networkadapters[0]).ipaddresses[0]
	I0528 15:29:52.218931   14720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:29:52.218931   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:29:52.218931   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300-m02 ).networkadapters[0]).ipaddresses[0]
	I0528 15:29:54.889707   14720 main.go:141] libmachine: [stdout =====>] : 172.18.212.184
	
	I0528 15:29:54.890037   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:29:54.890109   14720 sshutil.go:53] new ssh client: &{IP:172.18.212.184 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-517300-m02\id_rsa Username:docker}
	I0528 15:29:54.912631   14720 main.go:141] libmachine: [stdout =====>] : 172.18.212.184
	
	I0528 15:29:54.912631   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:29:54.912631   14720 sshutil.go:53] new ssh client: &{IP:172.18.212.184 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-517300-m02\id_rsa Username:docker}
	I0528 15:29:54.973580   14720 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0528 15:29:54.979199   14720 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.9093185s)
	W0528 15:29:54.979281   14720 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0528 15:29:54.989848   14720 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0528 15:29:55.113374   14720 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0528 15:29:55.114177   14720 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0528 15:29:55.114177   14720 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.0541271s)
	I0528 15:29:55.114375   14720 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0528 15:29:55.114576   14720 start.go:494] detecting cgroup driver to use...
	I0528 15:29:55.114774   14720 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0528 15:29:55.152758   14720 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0528 15:29:55.164721   14720 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0528 15:29:55.196897   14720 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0528 15:29:55.213694   14720 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0528 15:29:55.225645   14720 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0528 15:29:55.253575   14720 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0528 15:29:55.285652   14720 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0528 15:29:55.317283   14720 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0528 15:29:55.347726   14720 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0528 15:29:55.378344   14720 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0528 15:29:55.410526   14720 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0528 15:29:55.440490   14720 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0528 15:29:55.472958   14720 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0528 15:29:55.490085   14720 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0528 15:29:55.502802   14720 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0528 15:29:55.532937   14720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 15:29:55.710667   14720 ssh_runner.go:195] Run: sudo systemctl restart containerd
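The chain of sed edits above normalizes /etc/containerd/config.toml before containerd is reloaded and restarted: the pause image is pinned to registry.k8s.io/pause:3.9, restrict_oom_score_adj and SystemdCgroup are forced to false (minikube is using the cgroupfs driver here), legacy runtime names are rewritten to io.containerd.runc.v2, the CNI conf_dir is pointed at /etc/cni/net.d, and enable_unprivileged_ports = true is injected under the CRI plugin. As a paraphrased sketch only (the stock config.toml on the ISO carries more than this, and the exact layout is not echoed in the log), the keys touched end up roughly as:

    [plugins."io.containerd.grpc.v1.cri"]
      enable_unprivileged_ports = true
      sandbox_image = "registry.k8s.io/pause:3.9"
      restrict_oom_score_adj = false
      [plugins."io.containerd.grpc.v1.cri".cni]
        conf_dir = "/etc/cni/net.d"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
        runtime_type = "io.containerd.runc.v2"
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
          SystemdCgroup = false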
	I0528 15:29:55.739870   14720 start.go:494] detecting cgroup driver to use...
	I0528 15:29:55.752929   14720 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0528 15:29:55.777995   14720 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0528 15:29:55.777995   14720 command_runner.go:130] > [Unit]
	I0528 15:29:55.777995   14720 command_runner.go:130] > Description=Docker Application Container Engine
	I0528 15:29:55.777995   14720 command_runner.go:130] > Documentation=https://docs.docker.com
	I0528 15:29:55.777995   14720 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0528 15:29:55.777995   14720 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0528 15:29:55.778144   14720 command_runner.go:130] > StartLimitBurst=3
	I0528 15:29:55.778144   14720 command_runner.go:130] > StartLimitIntervalSec=60
	I0528 15:29:55.778144   14720 command_runner.go:130] > [Service]
	I0528 15:29:55.778144   14720 command_runner.go:130] > Type=notify
	I0528 15:29:55.778270   14720 command_runner.go:130] > Restart=on-failure
	I0528 15:29:55.778270   14720 command_runner.go:130] > Environment=NO_PROXY=172.18.212.160
	I0528 15:29:55.778270   14720 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0528 15:29:55.778270   14720 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0528 15:29:55.778362   14720 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0528 15:29:55.778393   14720 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0528 15:29:55.778449   14720 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0528 15:29:55.778449   14720 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0528 15:29:55.778449   14720 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0528 15:29:55.778554   14720 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0528 15:29:55.778554   14720 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0528 15:29:55.778554   14720 command_runner.go:130] > ExecStart=
	I0528 15:29:55.778612   14720 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0528 15:29:55.778612   14720 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0528 15:29:55.778674   14720 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0528 15:29:55.778674   14720 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0528 15:29:55.778746   14720 command_runner.go:130] > LimitNOFILE=infinity
	I0528 15:29:55.778746   14720 command_runner.go:130] > LimitNPROC=infinity
	I0528 15:29:55.778746   14720 command_runner.go:130] > LimitCORE=infinity
	I0528 15:29:55.778746   14720 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0528 15:29:55.778797   14720 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0528 15:29:55.778797   14720 command_runner.go:130] > TasksMax=infinity
	I0528 15:29:55.778797   14720 command_runner.go:130] > TimeoutStartSec=0
	I0528 15:29:55.778851   14720 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0528 15:29:55.778851   14720 command_runner.go:130] > Delegate=yes
	I0528 15:29:55.778851   14720 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0528 15:29:55.778919   14720 command_runner.go:130] > KillMode=process
	I0528 15:29:55.778919   14720 command_runner.go:130] > [Install]
	I0528 15:29:55.779046   14720 command_runner.go:130] > WantedBy=multi-user.target
	I0528 15:29:55.793918   14720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0528 15:29:55.823952   14720 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0528 15:29:55.871672   14720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0528 15:29:55.905969   14720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0528 15:29:55.940962   14720 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0528 15:29:56.011027   14720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0528 15:29:56.036460   14720 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0528 15:29:56.076643   14720 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0528 15:29:56.089015   14720 ssh_runner.go:195] Run: which cri-dockerd
	I0528 15:29:56.098811   14720 command_runner.go:130] > /usr/bin/cri-dockerd
	I0528 15:29:56.112230   14720 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0528 15:29:56.129836   14720 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
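crictl is repointed here from the containerd socket (written at 15:29:55) to cri-dockerd's socket now that docker is the selected runtime, and cri-dockerd gets a 189-byte 10-cni.conf drop-in whose contents are not echoed in the log. A hedged check that the endpoint answers once the runtime is up (not run by the harness; assumes crictl is on the node's PATH, as it normally is on the minikube ISO):

    sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version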
	I0528 15:29:56.173660   14720 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0528 15:29:56.368200   14720 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0528 15:29:56.556789   14720 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0528 15:29:56.557056   14720 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
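The 130-byte /etc/docker/daemon.json itself is not echoed in the log; it is the file that pins the cgroupfs cgroup driver noted on the line above, and the journal further down confirms storage-driver=overlay2 for this daemon. Roughly, and as an assumption about the exact keys rather than a verbatim copy:

    {
      "exec-opts": ["native.cgroupdriver=cgroupfs"],
      "storage-driver": "overlay2"
    }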
	I0528 15:29:56.600565   14720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 15:29:56.785552   14720 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0528 15:30:57.912336   14720 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0528 15:30:57.920389   14720 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0528 15:30:57.920848   14720 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.1351003s)
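The restart blocks for the full 60s dial deadline and then fails. The journal replayed below shows why: the original daemon (dockerd[655]) had started its own managed containerd on /var/run/docker/containerd/containerd.sock, while the restarted daemon (dockerd[1037], 22:29:57) waits on /run/containerd/containerd.sock and gives up with "context deadline exceeded" at 22:30:57, which is what surfaces as the RUNTIME_ENABLE exit a few lines further down. Hedged triage commands on the guest (standard systemd tooling, not run by the harness):

    sudo systemctl status containerd docker --no-pager   # is the system containerd actually up?
    sudo journalctl -u containerd --no-pager -n 100      # containerd-side view of the same window
    ls -l /run/containerd/containerd.sock                # does the socket dockerd is dialing exist?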
	I0528 15:30:57.934941   14720 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0528 15:30:57.957826   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 systemd[1]: Starting Docker Application Container Engine...
	I0528 15:30:57.957826   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[655]: time="2024-05-28T22:29:29.500677857Z" level=info msg="Starting up"
	I0528 15:30:57.957826   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[655]: time="2024-05-28T22:29:29.501766571Z" level=info msg="containerd not running, starting managed containerd"
	I0528 15:30:57.957928   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[655]: time="2024-05-28T22:29:29.502898485Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=661
	I0528 15:30:57.957928   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.537329523Z" level=info msg="starting containerd" revision=3a4de459a68952ffb703bbe7f2290861a75b6b67 version=v1.7.17
	I0528 15:30:57.957973   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.563154951Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0528 15:30:57.958068   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.563311253Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0528 15:30:57.958068   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.563406754Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0528 15:30:57.958156   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.563442154Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0528 15:30:57.958181   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.564365066Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0528 15:30:57.958181   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.564494468Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0528 15:30:57.958254   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.564793272Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0528 15:30:57.958254   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.564927573Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0528 15:30:57.958332   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.564949473Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0528 15:30:57.958332   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.564962374Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0528 15:30:57.958332   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.565971886Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0528 15:30:57.958407   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.566833297Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0528 15:30:57.958493   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.570415643Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0528 15:30:57.958493   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.570534644Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0528 15:30:57.958563   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.570756147Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0528 15:30:57.958563   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.570857449Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0528 15:30:57.958633   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.571407656Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0528 15:30:57.958633   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.571547257Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0528 15:30:57.958633   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.571597158Z" level=info msg="metadata content store policy set" policy=shared
	I0528 15:30:57.958708   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.581166780Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0528 15:30:57.958708   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.581309981Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0528 15:30:57.958708   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.581335582Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0528 15:30:57.958778   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.581351182Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0528 15:30:57.958778   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.581365282Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0528 15:30:57.958778   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.581434283Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0528 15:30:57.958847   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.581870988Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0528 15:30:57.958847   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.582042691Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0528 15:30:57.958847   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.582159792Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0528 15:30:57.958918   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.582181092Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0528 15:30:57.958918   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.582197993Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0528 15:30:57.958918   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.582212693Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0528 15:30:57.958987   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.582227093Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0528 15:30:57.958987   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.582242193Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0528 15:30:57.959055   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.582258493Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0528 15:30:57.959055   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.582273294Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0528 15:30:57.959131   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.582286894Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0528 15:30:57.959131   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.582301494Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0528 15:30:57.959183   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.582345495Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0528 15:30:57.959230   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.582360795Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0528 15:30:57.959252   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.582383395Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0528 15:30:57.959252   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.582402195Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0528 15:30:57.959312   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.582416195Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0528 15:30:57.959335   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.582429996Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0528 15:30:57.959363   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.582442996Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0528 15:30:57.959363   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.582456096Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0528 15:30:57.959363   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.582504297Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0528 15:30:57.959363   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.582520897Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0528 15:30:57.959363   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.582538397Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0528 15:30:57.959363   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.582560197Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0528 15:30:57.959363   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.582608898Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0528 15:30:57.959363   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.582626498Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0528 15:30:57.959363   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.582646498Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0528 15:30:57.959363   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.582659398Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0528 15:30:57.959363   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.582704899Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0528 15:30:57.959363   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.583162805Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0528 15:30:57.959363   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.583366707Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0528 15:30:57.959363   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.583523709Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0528 15:30:57.959363   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.583669411Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0528 15:30:57.959363   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.583811613Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0528 15:30:57.959363   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.583964415Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0528 15:30:57.959363   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.584148417Z" level=info msg="NRI interface is disabled by configuration."
	I0528 15:30:57.959363   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.586815751Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0528 15:30:57.959363   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.586940653Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0528 15:30:57.959363   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.587201956Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0528 15:30:57.959363   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.587242357Z" level=info msg="containerd successfully booted in 0.053418s"
	I0528 15:30:57.959363   14720 command_runner.go:130] > May 28 22:29:30 multinode-517300-m02 dockerd[655]: time="2024-05-28T22:29:30.554747831Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0528 15:30:57.959363   14720 command_runner.go:130] > May 28 22:29:30 multinode-517300-m02 dockerd[655]: time="2024-05-28T22:29:30.640291350Z" level=info msg="Loading containers: start."
	I0528 15:30:57.959363   14720 command_runner.go:130] > May 28 22:29:30 multinode-517300-m02 dockerd[655]: time="2024-05-28T22:29:30.974269271Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0528 15:30:57.959937   14720 command_runner.go:130] > May 28 22:29:31 multinode-517300-m02 dockerd[655]: time="2024-05-28T22:29:31.063613046Z" level=info msg="Loading containers: done."
	I0528 15:30:57.959937   14720 command_runner.go:130] > May 28 22:29:31 multinode-517300-m02 dockerd[655]: time="2024-05-28T22:29:31.102681859Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	I0528 15:30:57.959937   14720 command_runner.go:130] > May 28 22:29:31 multinode-517300-m02 dockerd[655]: time="2024-05-28T22:29:31.103541310Z" level=info msg="Daemon has completed initialization"
	I0528 15:30:57.959937   14720 command_runner.go:130] > May 28 22:29:31 multinode-517300-m02 systemd[1]: Started Docker Application Container Engine.
	I0528 15:30:57.959937   14720 command_runner.go:130] > May 28 22:29:31 multinode-517300-m02 dockerd[655]: time="2024-05-28T22:29:31.209689346Z" level=info msg="API listen on /var/run/docker.sock"
	I0528 15:30:57.959937   14720 command_runner.go:130] > May 28 22:29:31 multinode-517300-m02 dockerd[655]: time="2024-05-28T22:29:31.209883867Z" level=info msg="API listen on [::]:2376"
	I0528 15:30:57.959937   14720 command_runner.go:130] > May 28 22:29:56 multinode-517300-m02 systemd[1]: Stopping Docker Application Container Engine...
	I0528 15:30:57.959937   14720 command_runner.go:130] > May 28 22:29:56 multinode-517300-m02 dockerd[655]: time="2024-05-28T22:29:56.813831886Z" level=info msg="Processing signal 'terminated'"
	I0528 15:30:57.960073   14720 command_runner.go:130] > May 28 22:29:56 multinode-517300-m02 dockerd[655]: time="2024-05-28T22:29:56.816073902Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0528 15:30:57.960073   14720 command_runner.go:130] > May 28 22:29:56 multinode-517300-m02 dockerd[655]: time="2024-05-28T22:29:56.816890571Z" level=info msg="Daemon shutdown complete"
	I0528 15:30:57.960073   14720 command_runner.go:130] > May 28 22:29:56 multinode-517300-m02 dockerd[655]: time="2024-05-28T22:29:56.817202059Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0528 15:30:57.960073   14720 command_runner.go:130] > May 28 22:29:56 multinode-517300-m02 dockerd[655]: time="2024-05-28T22:29:56.817269957Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0528 15:30:57.960195   14720 command_runner.go:130] > May 28 22:29:57 multinode-517300-m02 systemd[1]: docker.service: Deactivated successfully.
	I0528 15:30:57.960195   14720 command_runner.go:130] > May 28 22:29:57 multinode-517300-m02 systemd[1]: Stopped Docker Application Container Engine.
	I0528 15:30:57.960259   14720 command_runner.go:130] > May 28 22:29:57 multinode-517300-m02 systemd[1]: Starting Docker Application Container Engine...
	I0528 15:30:57.960259   14720 command_runner.go:130] > May 28 22:29:57 multinode-517300-m02 dockerd[1037]: time="2024-05-28T22:29:57.888943493Z" level=info msg="Starting up"
	I0528 15:30:57.960314   14720 command_runner.go:130] > May 28 22:30:57 multinode-517300-m02 dockerd[1037]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0528 15:30:57.960362   14720 command_runner.go:130] > May 28 22:30:57 multinode-517300-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0528 15:30:57.960362   14720 command_runner.go:130] > May 28 22:30:57 multinode-517300-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0528 15:30:57.960362   14720 command_runner.go:130] > May 28 22:30:57 multinode-517300-m02 systemd[1]: Failed to start Docker Application Container Engine.
	I0528 15:30:57.968429   14720 out.go:177] 
	W0528 15:30:57.969898   14720 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	May 28 22:29:29 multinode-517300-m02 systemd[1]: Starting Docker Application Container Engine...
	May 28 22:29:29 multinode-517300-m02 dockerd[655]: time="2024-05-28T22:29:29.500677857Z" level=info msg="Starting up"
	May 28 22:29:29 multinode-517300-m02 dockerd[655]: time="2024-05-28T22:29:29.501766571Z" level=info msg="containerd not running, starting managed containerd"
	May 28 22:29:29 multinode-517300-m02 dockerd[655]: time="2024-05-28T22:29:29.502898485Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=661
	May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.537329523Z" level=info msg="starting containerd" revision=3a4de459a68952ffb703bbe7f2290861a75b6b67 version=v1.7.17
	May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.563154951Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.563311253Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.563406754Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.563442154Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.564365066Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.564494468Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.564793272Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.564927573Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.564949473Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.564962374Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.565971886Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.566833297Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.570415643Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.570534644Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.570756147Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.570857449Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.571407656Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.571547257Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.571597158Z" level=info msg="metadata content store policy set" policy=shared
	May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.581166780Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.581309981Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.581335582Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.581351182Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.581365282Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.581434283Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.581870988Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.582042691Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.582159792Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.582181092Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.582197993Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.582212693Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.582227093Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.582242193Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.582258493Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.582273294Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.582286894Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.582301494Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.582345495Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.582360795Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.582383395Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.582402195Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.582416195Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.582429996Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.582442996Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.582456096Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.582504297Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.582520897Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.582538397Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.582560197Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.582608898Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.582626498Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.582646498Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.582659398Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.582704899Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.583162805Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.583366707Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.583523709Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.583669411Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.583811613Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.583964415Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.584148417Z" level=info msg="NRI interface is disabled by configuration."
	May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.586815751Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.586940653Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.587201956Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.587242357Z" level=info msg="containerd successfully booted in 0.053418s"
	May 28 22:29:30 multinode-517300-m02 dockerd[655]: time="2024-05-28T22:29:30.554747831Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	May 28 22:29:30 multinode-517300-m02 dockerd[655]: time="2024-05-28T22:29:30.640291350Z" level=info msg="Loading containers: start."
	May 28 22:29:30 multinode-517300-m02 dockerd[655]: time="2024-05-28T22:29:30.974269271Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	May 28 22:29:31 multinode-517300-m02 dockerd[655]: time="2024-05-28T22:29:31.063613046Z" level=info msg="Loading containers: done."
	May 28 22:29:31 multinode-517300-m02 dockerd[655]: time="2024-05-28T22:29:31.102681859Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	May 28 22:29:31 multinode-517300-m02 dockerd[655]: time="2024-05-28T22:29:31.103541310Z" level=info msg="Daemon has completed initialization"
	May 28 22:29:31 multinode-517300-m02 systemd[1]: Started Docker Application Container Engine.
	May 28 22:29:31 multinode-517300-m02 dockerd[655]: time="2024-05-28T22:29:31.209689346Z" level=info msg="API listen on /var/run/docker.sock"
	May 28 22:29:31 multinode-517300-m02 dockerd[655]: time="2024-05-28T22:29:31.209883867Z" level=info msg="API listen on [::]:2376"
	May 28 22:29:56 multinode-517300-m02 systemd[1]: Stopping Docker Application Container Engine...
	May 28 22:29:56 multinode-517300-m02 dockerd[655]: time="2024-05-28T22:29:56.813831886Z" level=info msg="Processing signal 'terminated'"
	May 28 22:29:56 multinode-517300-m02 dockerd[655]: time="2024-05-28T22:29:56.816073902Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	May 28 22:29:56 multinode-517300-m02 dockerd[655]: time="2024-05-28T22:29:56.816890571Z" level=info msg="Daemon shutdown complete"
	May 28 22:29:56 multinode-517300-m02 dockerd[655]: time="2024-05-28T22:29:56.817202059Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	May 28 22:29:56 multinode-517300-m02 dockerd[655]: time="2024-05-28T22:29:56.817269957Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	May 28 22:29:57 multinode-517300-m02 systemd[1]: docker.service: Deactivated successfully.
	May 28 22:29:57 multinode-517300-m02 systemd[1]: Stopped Docker Application Container Engine.
	May 28 22:29:57 multinode-517300-m02 systemd[1]: Starting Docker Application Container Engine...
	May 28 22:29:57 multinode-517300-m02 dockerd[1037]: time="2024-05-28T22:29:57.888943493Z" level=info msg="Starting up"
	May 28 22:30:57 multinode-517300-m02 dockerd[1037]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	May 28 22:30:57 multinode-517300-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	May 28 22:30:57 multinode-517300-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	May 28 22:30:57 multinode-517300-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0528 15:30:57.971564   14720 out.go:239] * 
	W0528 15:30:57.973056   14720 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0528 15:30:57.974164   14720 out.go:177] 

                                                
                                                
** /stderr **
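Triage sketch (not part of the captured test output): the RUNTIME_ENABLE failure above is dockerd on multinode-517300-m02 timing out while dialing the containerd socket during `sudo systemctl restart docker` ("failed to dial \"/run/containerd/containerd.sock\": context deadline exceeded"). A minimal manual check, assuming the VM is still reachable and reusing the same CLI style the harness uses elsewhere in this report; looking at the containerd unit specifically is an assumption, since dockerd's own journal only shows the dial timeout:

    # Inspect both runtime units on the affected node (assumes the node is still up)
    out/minikube-windows-amd64.exe -p multinode-517300 ssh -n multinode-517300-m02 sudo systemctl status docker containerd
    # Pull containerd's own journal, which the captured dockerd log does not include
    out/minikube-windows-amd64.exe -p multinode-517300 ssh -n multinode-517300-m02 sudo journalctl -u containerd --no-pager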
multinode_test.go:328: failed to run minikube start. args "out/minikube-windows-amd64.exe node list -p multinode-517300" : exit status 90
multinode_test.go:331: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-517300
multinode_test.go:338: reported node list is not the same after restart. Before restart: multinode-517300	172.18.214.123
multinode-517300-m02	172.18.223.89
multinode-517300-m03	172.18.218.70

                                                
                                                
After restart: multinode-517300	172.18.212.160
multinode-517300-m02	172.18.212.184
multinode-517300-m03	172.18.218.70
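Triage sketch (not part of the captured test output): the control-plane and m02 addresses changed across the restart (172.18.214.123 -> 172.18.212.160 and 172.18.223.89 -> 172.18.212.184), which is consistent with Hyper-V's Default Switch handing out new DHCP leases when the VMs come back up; the restart aborted while re-provisioning m02, so m03's stored address was never refreshed. Assuming the cluster is reachable, the profile's stored addresses could be compared against what the nodes actually registered:

    # Addresses recorded in the minikube profile
    out/minikube-windows-amd64.exe node list -p multinode-517300
    # INTERNAL-IP as registered by each kubelet (kube context name matches the profile name)
    kubectl --context multinode-517300 get nodes -o wide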
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-517300 -n multinode-517300
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-517300 -n multinode-517300: (11.7369s)
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-517300 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-517300 logs -n 25: (8.513498s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| Command |                                                          Args                                                           |     Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| ssh     | multinode-517300 ssh -n                                                                                                 | multinode-517300 | minikube1\jenkins | v1.33.1 | 28 May 24 15:15 PDT | 28 May 24 15:16 PDT |
	|         | multinode-517300-m02 sudo cat                                                                                           |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                |                  |                   |         |                     |                     |
	| cp      | multinode-517300 cp multinode-517300-m02:/home/docker/cp-test.txt                                                       | multinode-517300 | minikube1\jenkins | v1.33.1 | 28 May 24 15:16 PDT | 28 May 24 15:16 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiNodeserialCopyFile279711523\001\cp-test_multinode-517300-m02.txt |                  |                   |         |                     |                     |
	| ssh     | multinode-517300 ssh -n                                                                                                 | multinode-517300 | minikube1\jenkins | v1.33.1 | 28 May 24 15:16 PDT | 28 May 24 15:16 PDT |
	|         | multinode-517300-m02 sudo cat                                                                                           |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                |                  |                   |         |                     |                     |
	| cp      | multinode-517300 cp multinode-517300-m02:/home/docker/cp-test.txt                                                       | multinode-517300 | minikube1\jenkins | v1.33.1 | 28 May 24 15:16 PDT | 28 May 24 15:16 PDT |
	|         | multinode-517300:/home/docker/cp-test_multinode-517300-m02_multinode-517300.txt                                         |                  |                   |         |                     |                     |
	| ssh     | multinode-517300 ssh -n                                                                                                 | multinode-517300 | minikube1\jenkins | v1.33.1 | 28 May 24 15:16 PDT | 28 May 24 15:16 PDT |
	|         | multinode-517300-m02 sudo cat                                                                                           |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                |                  |                   |         |                     |                     |
	| ssh     | multinode-517300 ssh -n multinode-517300 sudo cat                                                                       | multinode-517300 | minikube1\jenkins | v1.33.1 | 28 May 24 15:16 PDT | 28 May 24 15:17 PDT |
	|         | /home/docker/cp-test_multinode-517300-m02_multinode-517300.txt                                                          |                  |                   |         |                     |                     |
	| cp      | multinode-517300 cp multinode-517300-m02:/home/docker/cp-test.txt                                                       | multinode-517300 | minikube1\jenkins | v1.33.1 | 28 May 24 15:17 PDT | 28 May 24 15:17 PDT |
	|         | multinode-517300-m03:/home/docker/cp-test_multinode-517300-m02_multinode-517300-m03.txt                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-517300 ssh -n                                                                                                 | multinode-517300 | minikube1\jenkins | v1.33.1 | 28 May 24 15:17 PDT | 28 May 24 15:17 PDT |
	|         | multinode-517300-m02 sudo cat                                                                                           |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                |                  |                   |         |                     |                     |
	| ssh     | multinode-517300 ssh -n multinode-517300-m03 sudo cat                                                                   | multinode-517300 | minikube1\jenkins | v1.33.1 | 28 May 24 15:17 PDT | 28 May 24 15:17 PDT |
	|         | /home/docker/cp-test_multinode-517300-m02_multinode-517300-m03.txt                                                      |                  |                   |         |                     |                     |
	| cp      | multinode-517300 cp testdata\cp-test.txt                                                                                | multinode-517300 | minikube1\jenkins | v1.33.1 | 28 May 24 15:17 PDT | 28 May 24 15:17 PDT |
	|         | multinode-517300-m03:/home/docker/cp-test.txt                                                                           |                  |                   |         |                     |                     |
	| ssh     | multinode-517300 ssh -n                                                                                                 | multinode-517300 | minikube1\jenkins | v1.33.1 | 28 May 24 15:17 PDT | 28 May 24 15:17 PDT |
	|         | multinode-517300-m03 sudo cat                                                                                           |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                |                  |                   |         |                     |                     |
	| cp      | multinode-517300 cp multinode-517300-m03:/home/docker/cp-test.txt                                                       | multinode-517300 | minikube1\jenkins | v1.33.1 | 28 May 24 15:17 PDT | 28 May 24 15:18 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiNodeserialCopyFile279711523\001\cp-test_multinode-517300-m03.txt |                  |                   |         |                     |                     |
	| ssh     | multinode-517300 ssh -n                                                                                                 | multinode-517300 | minikube1\jenkins | v1.33.1 | 28 May 24 15:18 PDT | 28 May 24 15:18 PDT |
	|         | multinode-517300-m03 sudo cat                                                                                           |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                |                  |                   |         |                     |                     |
	| cp      | multinode-517300 cp multinode-517300-m03:/home/docker/cp-test.txt                                                       | multinode-517300 | minikube1\jenkins | v1.33.1 | 28 May 24 15:18 PDT | 28 May 24 15:18 PDT |
	|         | multinode-517300:/home/docker/cp-test_multinode-517300-m03_multinode-517300.txt                                         |                  |                   |         |                     |                     |
	| ssh     | multinode-517300 ssh -n                                                                                                 | multinode-517300 | minikube1\jenkins | v1.33.1 | 28 May 24 15:18 PDT | 28 May 24 15:18 PDT |
	|         | multinode-517300-m03 sudo cat                                                                                           |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                |                  |                   |         |                     |                     |
	| ssh     | multinode-517300 ssh -n multinode-517300 sudo cat                                                                       | multinode-517300 | minikube1\jenkins | v1.33.1 | 28 May 24 15:18 PDT | 28 May 24 15:18 PDT |
	|         | /home/docker/cp-test_multinode-517300-m03_multinode-517300.txt                                                          |                  |                   |         |                     |                     |
	| cp      | multinode-517300 cp multinode-517300-m03:/home/docker/cp-test.txt                                                       | multinode-517300 | minikube1\jenkins | v1.33.1 | 28 May 24 15:18 PDT | 28 May 24 15:19 PDT |
	|         | multinode-517300-m02:/home/docker/cp-test_multinode-517300-m03_multinode-517300-m02.txt                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-517300 ssh -n                                                                                                 | multinode-517300 | minikube1\jenkins | v1.33.1 | 28 May 24 15:19 PDT | 28 May 24 15:19 PDT |
	|         | multinode-517300-m03 sudo cat                                                                                           |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                |                  |                   |         |                     |                     |
	| ssh     | multinode-517300 ssh -n multinode-517300-m02 sudo cat                                                                   | multinode-517300 | minikube1\jenkins | v1.33.1 | 28 May 24 15:19 PDT | 28 May 24 15:19 PDT |
	|         | /home/docker/cp-test_multinode-517300-m03_multinode-517300-m02.txt                                                      |                  |                   |         |                     |                     |
	| node    | multinode-517300 node stop m03                                                                                          | multinode-517300 | minikube1\jenkins | v1.33.1 | 28 May 24 15:19 PDT | 28 May 24 15:19 PDT |
	| node    | multinode-517300 node start                                                                                             | multinode-517300 | minikube1\jenkins | v1.33.1 | 28 May 24 15:20 PDT | 28 May 24 15:23 PDT |
	|         | m03 -v=7 --alsologtostderr                                                                                              |                  |                   |         |                     |                     |
	| node    | list -p multinode-517300                                                                                                | multinode-517300 | minikube1\jenkins | v1.33.1 | 28 May 24 15:23 PDT |                     |
	| stop    | -p multinode-517300                                                                                                     | multinode-517300 | minikube1\jenkins | v1.33.1 | 28 May 24 15:23 PDT | 28 May 24 15:25 PDT |
	| start   | -p multinode-517300                                                                                                     | multinode-517300 | minikube1\jenkins | v1.33.1 | 28 May 24 15:25 PDT |                     |
	|         | --wait=true -v=8                                                                                                        |                  |                   |         |                     |                     |
	|         | --alsologtostderr                                                                                                       |                  |                   |         |                     |                     |
	| node    | list -p multinode-517300                                                                                                | multinode-517300 | minikube1\jenkins | v1.33.1 | 28 May 24 15:30 PDT |                     |
	|---------|-------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/28 15:25:24
	Running on machine: minikube1
	Binary: Built with gc go1.22.3 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0528 15:25:24.793120   14720 out.go:291] Setting OutFile to fd 744 ...
	I0528 15:25:24.793120   14720 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 15:25:24.793120   14720 out.go:304] Setting ErrFile to fd 664...
	I0528 15:25:24.793120   14720 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 15:25:24.820227   14720 out.go:298] Setting JSON to false
	I0528 15:25:24.821968   14720 start.go:129] hostinfo: {"hostname":"minikube1","uptime":7889,"bootTime":1716927235,"procs":199,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4412 Build 19045.4412","kernelVersion":"10.0.19045.4412 Build 19045.4412","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0528 15:25:24.821968   14720 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0528 15:25:24.941113   14720 out.go:177] * [multinode-517300] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4412 Build 19045.4412
	I0528 15:25:24.970595   14720 notify.go:220] Checking for updates...
	I0528 15:25:25.000722   14720 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0528 15:25:25.019859   14720 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0528 15:25:25.072083   14720 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0528 15:25:25.079644   14720 out.go:177]   - MINIKUBE_LOCATION=18966
	I0528 15:25:25.106400   14720 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0528 15:25:25.115758   14720 config.go:182] Loaded profile config "multinode-517300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0528 15:25:25.116040   14720 driver.go:392] Setting default libvirt URI to qemu:///system
	I0528 15:25:30.367697   14720 out.go:177] * Using the hyperv driver based on existing profile
	I0528 15:25:30.397321   14720 start.go:297] selected driver: hyperv
	I0528 15:25:30.404488   14720 start.go:901] validating driver "hyperv" against &{Name:multinode-517300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Ku
bernetesVersion:v1.30.1 ClusterName:multinode-517300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.18.214.123 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.18.223.89 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.18.218.70 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:f
alse ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p Mo
untUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0528 15:25:30.405100   14720 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0528 15:25:30.458688   14720 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0528 15:25:30.458688   14720 cni.go:84] Creating CNI manager for ""
	I0528 15:25:30.458688   14720 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0528 15:25:30.458688   14720 start.go:340] cluster config:
	{Name:multinode-517300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-517300 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.18.214.123 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.18.223.89 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.18.218.70 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisione
r:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:fals
e CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0528 15:25:30.458688   14720 iso.go:125] acquiring lock: {Name:mk309f19768252c5551da1c999633aa2617813e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0528 15:25:30.556966   14720 out.go:177] * Starting "multinode-517300" primary control-plane node in "multinode-517300" cluster
	I0528 15:25:30.560205   14720 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0528 15:25:30.560606   14720 preload.go:147] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
	I0528 15:25:30.560606   14720 cache.go:56] Caching tarball of preloaded images
	I0528 15:25:30.561141   14720 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0528 15:25:30.561340   14720 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0528 15:25:30.561636   14720 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-517300\config.json ...
	I0528 15:25:30.564285   14720 start.go:360] acquireMachinesLock for multinode-517300: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0528 15:25:30.564285   14720 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-517300"
	I0528 15:25:30.564843   14720 start.go:96] Skipping create...Using existing machine configuration
	I0528 15:25:30.564843   14720 fix.go:54] fixHost starting: 
	I0528 15:25:30.565756   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300 ).state
	I0528 15:25:33.198728   14720 main.go:141] libmachine: [stdout =====>] : Off
	
	I0528 15:25:33.198728   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:25:33.198728   14720 fix.go:112] recreateIfNeeded on multinode-517300: state=Stopped err=<nil>
	W0528 15:25:33.198728   14720 fix.go:138] unexpected machine state, will restart: <nil>
	I0528 15:25:33.259705   14720 out.go:177] * Restarting existing hyperv VM for "multinode-517300" ...
	I0528 15:25:33.301807   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-517300
	I0528 15:25:36.370835   14720 main.go:141] libmachine: [stdout =====>] : 
	I0528 15:25:36.370835   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:25:36.370835   14720 main.go:141] libmachine: Waiting for host to start...
	I0528 15:25:36.370990   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300 ).state
	I0528 15:25:38.603132   14720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:25:38.603132   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:25:38.603223   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300 ).networkadapters[0]).ipaddresses[0]
	I0528 15:25:41.047045   14720 main.go:141] libmachine: [stdout =====>] : 
	I0528 15:25:41.047045   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:25:42.047954   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300 ).state
	I0528 15:25:44.242847   14720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:25:44.253935   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:25:44.253935   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300 ).networkadapters[0]).ipaddresses[0]
	I0528 15:25:46.657784   14720 main.go:141] libmachine: [stdout =====>] : 
	I0528 15:25:46.657784   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:25:47.669960   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300 ).state
	I0528 15:25:49.798233   14720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:25:49.798233   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:25:49.800636   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300 ).networkadapters[0]).ipaddresses[0]
	I0528 15:25:52.224274   14720 main.go:141] libmachine: [stdout =====>] : 
	I0528 15:25:52.228477   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:25:53.231006   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300 ).state
	I0528 15:25:55.369449   14720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:25:55.369449   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:25:55.374368   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300 ).networkadapters[0]).ipaddresses[0]
	I0528 15:25:57.809941   14720 main.go:141] libmachine: [stdout =====>] : 
	I0528 15:25:57.809941   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:25:58.828940   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300 ).state
	I0528 15:26:00.956568   14720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:26:00.956568   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:26:00.967839   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300 ).networkadapters[0]).ipaddresses[0]
	I0528 15:26:03.423642   14720 main.go:141] libmachine: [stdout =====>] : 172.18.212.160
	
	I0528 15:26:03.425986   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:26:03.428609   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300 ).state
	I0528 15:26:05.465538   14720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:26:05.476821   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:26:05.476947   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300 ).networkadapters[0]).ipaddresses[0]
	I0528 15:26:07.885570   14720 main.go:141] libmachine: [stdout =====>] : 172.18.212.160
	
	I0528 15:26:07.885570   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:26:07.897099   14720 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-517300\config.json ...
	I0528 15:26:07.899911   14720 machine.go:94] provisionDockerMachine start ...
	I0528 15:26:07.899911   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300 ).state
	I0528 15:26:09.990958   14720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:26:09.990958   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:26:09.991114   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300 ).networkadapters[0]).ipaddresses[0]
	I0528 15:26:12.467912   14720 main.go:141] libmachine: [stdout =====>] : 172.18.212.160
	
	I0528 15:26:12.467912   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:26:12.474218   14720 main.go:141] libmachine: Using SSH client type: native
	I0528 15:26:12.474419   14720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5ca4a0] 0x5cd080 <nil>  [] 0s} 172.18.212.160 22 <nil> <nil>}
	I0528 15:26:12.474419   14720 main.go:141] libmachine: About to run SSH command:
	hostname
	I0528 15:26:12.602030   14720 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0528 15:26:12.602292   14720 buildroot.go:166] provisioning hostname "multinode-517300"
	I0528 15:26:12.602382   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300 ).state
	I0528 15:26:14.681328   14720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:26:14.681328   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:26:14.692586   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300 ).networkadapters[0]).ipaddresses[0]
	I0528 15:26:17.154326   14720 main.go:141] libmachine: [stdout =====>] : 172.18.212.160
	
	I0528 15:26:17.154326   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:26:17.172216   14720 main.go:141] libmachine: Using SSH client type: native
	I0528 15:26:17.172741   14720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5ca4a0] 0x5cd080 <nil>  [] 0s} 172.18.212.160 22 <nil> <nil>}
	I0528 15:26:17.172741   14720 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-517300 && echo "multinode-517300" | sudo tee /etc/hostname
	I0528 15:26:17.323860   14720 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-517300
	
	I0528 15:26:17.323942   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300 ).state
	I0528 15:26:19.394560   14720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:26:19.394560   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:26:19.394856   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300 ).networkadapters[0]).ipaddresses[0]
	I0528 15:26:21.842142   14720 main.go:141] libmachine: [stdout =====>] : 172.18.212.160
	
	I0528 15:26:21.842142   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:26:21.857046   14720 main.go:141] libmachine: Using SSH client type: native
	I0528 15:26:21.857942   14720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5ca4a0] 0x5cd080 <nil>  [] 0s} 172.18.212.160 22 <nil> <nil>}
	I0528 15:26:21.857942   14720 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-517300' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-517300/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-517300' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0528 15:26:21.992394   14720 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0528 15:26:21.992394   14720 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0528 15:26:21.992394   14720 buildroot.go:174] setting up certificates
	I0528 15:26:21.992394   14720 provision.go:84] configureAuth start
	I0528 15:26:21.992394   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300 ).state
	I0528 15:26:24.066357   14720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:26:24.066357   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:26:24.082083   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300 ).networkadapters[0]).ipaddresses[0]
	I0528 15:26:26.535800   14720 main.go:141] libmachine: [stdout =====>] : 172.18.212.160
	
	I0528 15:26:26.535800   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:26:26.546807   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300 ).state
	I0528 15:26:28.575768   14720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:26:28.575768   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:26:28.575768   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300 ).networkadapters[0]).ipaddresses[0]
	I0528 15:26:31.019244   14720 main.go:141] libmachine: [stdout =====>] : 172.18.212.160
	
	I0528 15:26:31.019244   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:26:31.019458   14720 provision.go:143] copyHostCerts
	I0528 15:26:31.019716   14720 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0528 15:26:31.020471   14720 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0528 15:26:31.020471   14720 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0528 15:26:31.021228   14720 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1675 bytes)
	I0528 15:26:31.022957   14720 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0528 15:26:31.023405   14720 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0528 15:26:31.023545   14720 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0528 15:26:31.024015   14720 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0528 15:26:31.025597   14720 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0528 15:26:31.025992   14720 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0528 15:26:31.025992   14720 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0528 15:26:31.026532   14720 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0528 15:26:31.027345   14720 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-517300 san=[127.0.0.1 172.18.212.160 localhost minikube multinode-517300]
	I0528 15:26:31.173694   14720 provision.go:177] copyRemoteCerts
	I0528 15:26:31.185623   14720 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0528 15:26:31.185803   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300 ).state
	I0528 15:26:33.210400   14720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:26:33.210400   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:26:33.222187   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300 ).networkadapters[0]).ipaddresses[0]
	I0528 15:26:35.698699   14720 main.go:141] libmachine: [stdout =====>] : 172.18.212.160
	
	I0528 15:26:35.698699   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:26:35.699042   14720 sshutil.go:53] new ssh client: &{IP:172.18.212.160 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-517300\id_rsa Username:docker}
	I0528 15:26:35.798411   14720 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.612684s)
	I0528 15:26:35.798559   14720 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0528 15:26:35.799250   14720 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0528 15:26:35.840364   14720 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0528 15:26:35.840364   14720 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0528 15:26:35.884607   14720 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0528 15:26:35.885139   14720 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0528 15:26:35.925303   14720 provision.go:87] duration metric: took 13.9328657s to configureAuth
	I0528 15:26:35.925303   14720 buildroot.go:189] setting minikube options for container-runtime
	I0528 15:26:35.926251   14720 config.go:182] Loaded profile config "multinode-517300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0528 15:26:35.926251   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300 ).state
	I0528 15:26:37.987033   14720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:26:37.987033   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:26:37.987033   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300 ).networkadapters[0]).ipaddresses[0]
	I0528 15:26:40.420160   14720 main.go:141] libmachine: [stdout =====>] : 172.18.212.160
	
	I0528 15:26:40.420160   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:26:40.436760   14720 main.go:141] libmachine: Using SSH client type: native
	I0528 15:26:40.437540   14720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5ca4a0] 0x5cd080 <nil>  [] 0s} 172.18.212.160 22 <nil> <nil>}
	I0528 15:26:40.437540   14720 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0528 15:26:40.556271   14720 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0528 15:26:40.556362   14720 buildroot.go:70] root file system type: tmpfs
	I0528 15:26:40.556496   14720 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0528 15:26:40.556496   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300 ).state
	I0528 15:26:42.553713   14720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:26:42.553713   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:26:42.566120   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300 ).networkadapters[0]).ipaddresses[0]
	I0528 15:26:44.985445   14720 main.go:141] libmachine: [stdout =====>] : 172.18.212.160
	
	I0528 15:26:44.985445   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:26:45.001160   14720 main.go:141] libmachine: Using SSH client type: native
	I0528 15:26:45.001925   14720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5ca4a0] 0x5cd080 <nil>  [] 0s} 172.18.212.160 22 <nil> <nil>}
	I0528 15:26:45.001925   14720 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0528 15:26:45.156257   14720 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0528 15:26:45.156454   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300 ).state
	I0528 15:26:47.163834   14720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:26:47.163834   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:26:47.175756   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300 ).networkadapters[0]).ipaddresses[0]
	I0528 15:26:49.576490   14720 main.go:141] libmachine: [stdout =====>] : 172.18.212.160
	
	I0528 15:26:49.576490   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:26:49.596427   14720 main.go:141] libmachine: Using SSH client type: native
	I0528 15:26:49.596427   14720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5ca4a0] 0x5cd080 <nil>  [] 0s} 172.18.212.160 22 <nil> <nil>}
	I0528 15:26:49.596427   14720 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0528 15:26:51.980270   14720 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0528 15:26:51.980270   14720 machine.go:97] duration metric: took 44.0802217s to provisionDockerMachine
	I0528 15:26:51.980270   14720 start.go:293] postStartSetup for "multinode-517300" (driver="hyperv")
	I0528 15:26:51.980270   14720 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0528 15:26:51.990863   14720 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0528 15:26:51.990863   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300 ).state
	I0528 15:26:54.032602   14720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:26:54.032602   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:26:54.032602   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300 ).networkadapters[0]).ipaddresses[0]
	I0528 15:26:56.454286   14720 main.go:141] libmachine: [stdout =====>] : 172.18.212.160
	
	I0528 15:26:56.465927   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:26:56.465927   14720 sshutil.go:53] new ssh client: &{IP:172.18.212.160 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-517300\id_rsa Username:docker}
	I0528 15:26:56.569122   14720 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.5781418s)
	I0528 15:26:56.581366   14720 ssh_runner.go:195] Run: cat /etc/os-release
	I0528 15:26:56.589288   14720 command_runner.go:130] > NAME=Buildroot
	I0528 15:26:56.589288   14720 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0528 15:26:56.589288   14720 command_runner.go:130] > ID=buildroot
	I0528 15:26:56.589288   14720 command_runner.go:130] > VERSION_ID=2023.02.9
	I0528 15:26:56.589288   14720 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0528 15:26:56.589406   14720 info.go:137] Remote host: Buildroot 2023.02.9
	I0528 15:26:56.589453   14720 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0528 15:26:56.589945   14720 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0528 15:26:56.590584   14720 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\17802.pem -> 17802.pem in /etc/ssl/certs
	I0528 15:26:56.590584   14720 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\17802.pem -> /etc/ssl/certs/17802.pem
	I0528 15:26:56.603804   14720 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0528 15:26:56.621097   14720 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\17802.pem --> /etc/ssl/certs/17802.pem (1708 bytes)
	I0528 15:26:56.663288   14720 start.go:296] duration metric: took 4.6830028s for postStartSetup
	I0528 15:26:56.663288   14720 fix.go:56] duration metric: took 1m26.0981766s for fixHost
	I0528 15:26:56.663288   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300 ).state
	I0528 15:26:58.778704   14720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:26:58.789832   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:26:58.790052   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300 ).networkadapters[0]).ipaddresses[0]
	I0528 15:27:01.359420   14720 main.go:141] libmachine: [stdout =====>] : 172.18.212.160
	
	I0528 15:27:01.363286   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:27:01.368997   14720 main.go:141] libmachine: Using SSH client type: native
	I0528 15:27:01.370106   14720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5ca4a0] 0x5cd080 <nil>  [] 0s} 172.18.212.160 22 <nil> <nil>}
	I0528 15:27:01.370175   14720 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0528 15:27:01.501827   14720 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716935221.502949732
	
	I0528 15:27:01.501827   14720 fix.go:216] guest clock: 1716935221.502949732
	I0528 15:27:01.501827   14720 fix.go:229] Guest: 2024-05-28 15:27:01.502949732 -0700 PDT Remote: 2024-05-28 15:26:56.6632883 -0700 PDT m=+91.953439201 (delta=4.839661432s)
	I0528 15:27:01.501827   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300 ).state
	I0528 15:27:03.564337   14720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:27:03.564337   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:27:03.564337   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300 ).networkadapters[0]).ipaddresses[0]
	I0528 15:27:05.993477   14720 main.go:141] libmachine: [stdout =====>] : 172.18.212.160
	
	I0528 15:27:05.993477   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:27:05.999949   14720 main.go:141] libmachine: Using SSH client type: native
	I0528 15:27:06.000095   14720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5ca4a0] 0x5cd080 <nil>  [] 0s} 172.18.212.160 22 <nil> <nil>}
	I0528 15:27:06.000095   14720 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1716935221
	I0528 15:27:06.136645   14720 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue May 28 22:27:01 UTC 2024
	
	I0528 15:27:06.136761   14720 fix.go:236] clock set: Tue May 28 22:27:01 UTC 2024
	 (err=<nil>)
	I0528 15:27:06.136761   14720 start.go:83] releasing machines lock for "multinode-517300", held for 1m35.5721784s
	I0528 15:27:06.136761   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300 ).state
	I0528 15:27:08.172243   14720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:27:08.172243   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:27:08.184445   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300 ).networkadapters[0]).ipaddresses[0]
	I0528 15:27:10.623498   14720 main.go:141] libmachine: [stdout =====>] : 172.18.212.160
	
	I0528 15:27:10.634300   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:27:10.639294   14720 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0528 15:27:10.639462   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300 ).state
	I0528 15:27:10.649333   14720 ssh_runner.go:195] Run: cat /version.json
	I0528 15:27:10.649333   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300 ).state
	I0528 15:27:12.810282   14720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:27:12.810282   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:27:12.810402   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300 ).networkadapters[0]).ipaddresses[0]
	I0528 15:27:12.810402   14720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:27:12.810402   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:27:12.810402   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300 ).networkadapters[0]).ipaddresses[0]
	I0528 15:27:15.486669   14720 main.go:141] libmachine: [stdout =====>] : 172.18.212.160
	
	I0528 15:27:15.486888   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:27:15.486888   14720 sshutil.go:53] new ssh client: &{IP:172.18.212.160 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-517300\id_rsa Username:docker}
	I0528 15:27:15.499652   14720 main.go:141] libmachine: [stdout =====>] : 172.18.212.160
	
	I0528 15:27:15.499652   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:27:15.505114   14720 sshutil.go:53] new ssh client: &{IP:172.18.212.160 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-517300\id_rsa Username:docker}
	I0528 15:27:15.797854   14720 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0528 15:27:15.797920   14720 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.1585539s)
	I0528 15:27:15.797920   14720 command_runner.go:130] > {"iso_version": "v1.33.1-1716398070-18934", "kicbase_version": "v0.0.44-1716228441-18934", "minikube_version": "v1.33.1", "commit": "7bc64cce06153f72c1bf9cbcf2114663ad5af3b7"}
	I0528 15:27:15.797920   14720 ssh_runner.go:235] Completed: cat /version.json: (5.1485706s)
	I0528 15:27:15.810189   14720 ssh_runner.go:195] Run: systemctl --version
	I0528 15:27:15.820078   14720 command_runner.go:130] > systemd 252 (252)
	I0528 15:27:15.820078   14720 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0528 15:27:15.832074   14720 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0528 15:27:15.842892   14720 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0528 15:27:15.844678   14720 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0528 15:27:15.856213   14720 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0528 15:27:15.885325   14720 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0528 15:27:15.885325   14720 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0528 15:27:15.885325   14720 start.go:494] detecting cgroup driver to use...
	I0528 15:27:15.885325   14720 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0528 15:27:15.919459   14720 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0528 15:27:15.932235   14720 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0528 15:27:15.963657   14720 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0528 15:27:15.983429   14720 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0528 15:27:16.000076   14720 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0528 15:27:16.033662   14720 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0528 15:27:16.070882   14720 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0528 15:27:16.104322   14720 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0528 15:27:16.137991   14720 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0528 15:27:16.172248   14720 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0528 15:27:16.205462   14720 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0528 15:27:16.237391   14720 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0528 15:27:16.269287   14720 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0528 15:27:16.286547   14720 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0528 15:27:16.299631   14720 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0528 15:27:16.329340   14720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 15:27:16.528272   14720 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0528 15:27:16.557346   14720 start.go:494] detecting cgroup driver to use...
	I0528 15:27:16.569543   14720 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0528 15:27:16.593896   14720 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0528 15:27:16.593964   14720 command_runner.go:130] > [Unit]
	I0528 15:27:16.593964   14720 command_runner.go:130] > Description=Docker Application Container Engine
	I0528 15:27:16.593964   14720 command_runner.go:130] > Documentation=https://docs.docker.com
	I0528 15:27:16.593964   14720 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0528 15:27:16.593964   14720 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0528 15:27:16.593964   14720 command_runner.go:130] > StartLimitBurst=3
	I0528 15:27:16.594042   14720 command_runner.go:130] > StartLimitIntervalSec=60
	I0528 15:27:16.594042   14720 command_runner.go:130] > [Service]
	I0528 15:27:16.594042   14720 command_runner.go:130] > Type=notify
	I0528 15:27:16.594042   14720 command_runner.go:130] > Restart=on-failure
	I0528 15:27:16.594100   14720 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0528 15:27:16.594119   14720 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0528 15:27:16.594119   14720 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0528 15:27:16.594119   14720 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0528 15:27:16.594119   14720 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0528 15:27:16.594181   14720 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0528 15:27:16.594235   14720 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0528 15:27:16.594235   14720 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0528 15:27:16.594235   14720 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0528 15:27:16.594235   14720 command_runner.go:130] > ExecStart=
	I0528 15:27:16.594235   14720 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0528 15:27:16.594235   14720 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0528 15:27:16.594235   14720 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0528 15:27:16.594235   14720 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0528 15:27:16.594235   14720 command_runner.go:130] > LimitNOFILE=infinity
	I0528 15:27:16.594235   14720 command_runner.go:130] > LimitNPROC=infinity
	I0528 15:27:16.594235   14720 command_runner.go:130] > LimitCORE=infinity
	I0528 15:27:16.594235   14720 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0528 15:27:16.594235   14720 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0528 15:27:16.594235   14720 command_runner.go:130] > TasksMax=infinity
	I0528 15:27:16.594235   14720 command_runner.go:130] > TimeoutStartSec=0
	I0528 15:27:16.594235   14720 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0528 15:27:16.594235   14720 command_runner.go:130] > Delegate=yes
	I0528 15:27:16.594235   14720 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0528 15:27:16.594235   14720 command_runner.go:130] > KillMode=process
	I0528 15:27:16.594235   14720 command_runner.go:130] > [Install]
	I0528 15:27:16.594235   14720 command_runner.go:130] > WantedBy=multi-user.target
	I0528 15:27:16.606927   14720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0528 15:27:16.640646   14720 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0528 15:27:16.683361   14720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0528 15:27:16.719062   14720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0528 15:27:16.753625   14720 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0528 15:27:16.817746   14720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0528 15:27:16.844377   14720 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0528 15:27:16.876924   14720 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0528 15:27:16.889474   14720 ssh_runner.go:195] Run: which cri-dockerd
	I0528 15:27:16.896203   14720 command_runner.go:130] > /usr/bin/cri-dockerd
	I0528 15:27:16.907299   14720 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0528 15:27:16.925500   14720 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0528 15:27:16.970096   14720 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0528 15:27:17.167851   14720 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0528 15:27:17.349334   14720 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0528 15:27:17.349554   14720 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0528 15:27:17.394177   14720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 15:27:17.587080   14720 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0528 15:27:20.179527   14720 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5923022s)
	I0528 15:27:20.194549   14720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0528 15:27:20.228650   14720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0528 15:27:20.262024   14720 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0528 15:27:20.451327   14720 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0528 15:27:20.653768   14720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 15:27:20.852229   14720 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0528 15:27:20.894838   14720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0528 15:27:20.931900   14720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 15:27:21.119151   14720 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0528 15:27:21.227604   14720 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0528 15:27:21.240570   14720 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0528 15:27:21.248372   14720 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0528 15:27:21.248907   14720 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0528 15:27:21.248907   14720 command_runner.go:130] > Device: 0,22	Inode: 866         Links: 1
	I0528 15:27:21.248907   14720 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0528 15:27:21.248907   14720 command_runner.go:130] > Access: 2024-05-28 22:27:21.146794483 +0000
	I0528 15:27:21.248953   14720 command_runner.go:130] > Modify: 2024-05-28 22:27:21.146794483 +0000
	I0528 15:27:21.248953   14720 command_runner.go:130] > Change: 2024-05-28 22:27:21.149794500 +0000
	I0528 15:27:21.248953   14720 command_runner.go:130] >  Birth: -
	I0528 15:27:21.248991   14720 start.go:562] Will wait 60s for crictl version
	I0528 15:27:21.259456   14720 ssh_runner.go:195] Run: which crictl
	I0528 15:27:21.264946   14720 command_runner.go:130] > /usr/bin/crictl
	I0528 15:27:21.276914   14720 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0528 15:27:21.330043   14720 command_runner.go:130] > Version:  0.1.0
	I0528 15:27:21.330043   14720 command_runner.go:130] > RuntimeName:  docker
	I0528 15:27:21.330043   14720 command_runner.go:130] > RuntimeVersion:  26.0.2
	I0528 15:27:21.330179   14720 command_runner.go:130] > RuntimeApiVersion:  v1
	I0528 15:27:21.330224   14720 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0528 15:27:21.340066   14720 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0528 15:27:21.368308   14720 command_runner.go:130] > 26.0.2
	I0528 15:27:21.377922   14720 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0528 15:27:21.404736   14720 command_runner.go:130] > 26.0.2
	I0528 15:27:21.409882   14720 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.0.2 ...
	I0528 15:27:21.409882   14720 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0528 15:27:21.414932   14720 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0528 15:27:21.414932   14720 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0528 15:27:21.414932   14720 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0528 15:27:21.414932   14720 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:91:32:cc Flags:up|broadcast|multicast|running}
	I0528 15:27:21.418820   14720 ip.go:210] interface addr: fe80::6a65:d82d:f435:d71f/64
	I0528 15:27:21.418820   14720 ip.go:210] interface addr: 172.18.208.1/20
	I0528 15:27:21.433322   14720 ssh_runner.go:195] Run: grep 172.18.208.1	host.minikube.internal$ /etc/hosts
	I0528 15:27:21.439060   14720 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.18.208.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0528 15:27:21.457633   14720 kubeadm.go:877] updating cluster {Name:multinode-517300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.30.1 ClusterName:multinode-517300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.18.212.160 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.18.223.89 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.18.218.70 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingres
s-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:dock
er BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0528 15:27:21.457633   14720 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0528 15:27:21.468413   14720 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0528 15:27:21.492932   14720 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.1
	I0528 15:27:21.492932   14720 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.1
	I0528 15:27:21.492932   14720 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.1
	I0528 15:27:21.492932   14720 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.1
	I0528 15:27:21.492932   14720 command_runner.go:130] > kindest/kindnetd:v20240513-cd2ac642
	I0528 15:27:21.492932   14720 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0528 15:27:21.492932   14720 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0528 15:27:21.492932   14720 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0528 15:27:21.492932   14720 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0528 15:27:21.492932   14720 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0528 15:27:21.492932   14720 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	kindest/kindnetd:v20240513-cd2ac642
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0528 15:27:21.492932   14720 docker.go:615] Images already preloaded, skipping extraction
	I0528 15:27:21.502719   14720 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0528 15:27:21.525019   14720 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.1
	I0528 15:27:21.526008   14720 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.1
	I0528 15:27:21.526008   14720 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.1
	I0528 15:27:21.526008   14720 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.1
	I0528 15:27:21.526008   14720 command_runner.go:130] > kindest/kindnetd:v20240513-cd2ac642
	I0528 15:27:21.526008   14720 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0528 15:27:21.526008   14720 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0528 15:27:21.526198   14720 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0528 15:27:21.526198   14720 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0528 15:27:21.526256   14720 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0528 15:27:21.526435   14720 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	kindest/kindnetd:v20240513-cd2ac642
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0528 15:27:21.526551   14720 cache_images.go:84] Images are preloaded, skipping loading
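The decision above boils down to comparing the output of docker images --format {{.Repository}}:{{.Tag}} against the image set expected for Kubernetes v1.30.1; only a missing image would force extraction of the preload tarball. A rough local sketch of that comparison (missingImages is a hypothetical helper, run directly rather than over the SSH runner):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // missingImages lists local Docker images and reports which of the
    // expected references are not present.
    func missingImages(expected []string) ([]string, error) {
        out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
        if err != nil {
            return nil, err
        }
        have := map[string]bool{}
        for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
            have[strings.TrimSpace(line)] = true
        }
        var missing []string
        for _, img := range expected {
            if !have[img] {
                missing = append(missing, img)
            }
        }
        return missing, nil
    }

    func main() {
        expected := []string{
            "registry.k8s.io/kube-apiserver:v1.30.1",
            "registry.k8s.io/etcd:3.5.12-0",
            "registry.k8s.io/coredns/coredns:v1.11.1",
        }
        missing, err := missingImages(expected)
        fmt.Println(missing, err) // an empty slice means the preload can be skipped
    }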
	I0528 15:27:21.526632   14720 kubeadm.go:928] updating node { 172.18.212.160 8443 v1.30.1 docker true true} ...
	I0528 15:27:21.526897   14720 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-517300 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.18.212.160
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:multinode-517300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0528 15:27:21.538037   14720 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0528 15:27:21.573821   14720 command_runner.go:130] > cgroupfs
	I0528 15:27:21.574380   14720 cni.go:84] Creating CNI manager for ""
	I0528 15:27:21.574380   14720 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0528 15:27:21.574380   14720 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0528 15:27:21.574380   14720 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.18.212.160 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-517300 NodeName:multinode-517300 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.18.212.160"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.18.212.160 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0528 15:27:21.575099   14720 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.18.212.160
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-517300"
	  kubeletExtraArgs:
	    node-ip: 172.18.212.160
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.18.212.160"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0528 15:27:21.587378   14720 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0528 15:27:21.603658   14720 command_runner.go:130] > kubeadm
	I0528 15:27:21.603658   14720 command_runner.go:130] > kubectl
	I0528 15:27:21.603658   14720 command_runner.go:130] > kubelet
	I0528 15:27:21.603658   14720 binaries.go:44] Found k8s binaries, skipping transfer
	I0528 15:27:21.617533   14720 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0528 15:27:21.634311   14720 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0528 15:27:21.662953   14720 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0528 15:27:21.692372   14720 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2164 bytes)
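The kubeadm.yaml.new copied above is rendered from the cluster profile (advertise address, node name, CRI socket, CIDRs) before it ever reaches the node. As a sketch of that kind of substitution, here is a toy text/template rendering of just the InitConfiguration head; the template text and the initCfg struct are illustrative, not minikube's actual generator:

    package main

    import (
        "os"
        "text/template"
    )

    // initCfg carries the per-node values that vary in the InitConfiguration
    // section of the generated kubeadm.yaml.
    type initCfg struct {
        AdvertiseAddress string
        BindPort         int
        NodeName         string
        CRISocket        string
    }

    const initTmpl = "apiVersion: kubeadm.k8s.io/v1beta3\n" +
        "kind: InitConfiguration\n" +
        "localAPIEndpoint:\n" +
        "  advertiseAddress: {{.AdvertiseAddress}}\n" +
        "  bindPort: {{.BindPort}}\n" +
        "nodeRegistration:\n" +
        "  criSocket: {{.CRISocket}}\n" +
        "  name: \"{{.NodeName}}\"\n"

    func main() {
        t := template.Must(template.New("kubeadm").Parse(initTmpl))
        // Values taken from this run; the advertise address is what changed
        // between the previous start and this restart.
        _ = t.Execute(os.Stdout, initCfg{
            AdvertiseAddress: "172.18.212.160",
            BindPort:         8443,
            NodeName:         "multinode-517300",
            CRISocket:        "unix:///var/run/cri-dockerd.sock",
        })
    }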
	I0528 15:27:21.739012   14720 ssh_runner.go:195] Run: grep 172.18.212.160	control-plane.minikube.internal$ /etc/hosts
	I0528 15:27:21.745738   14720 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.18.212.160	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0528 15:27:21.779248   14720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 15:27:21.957742   14720 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0528 15:27:21.985257   14720 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-517300 for IP: 172.18.212.160
	I0528 15:27:21.985339   14720 certs.go:194] generating shared ca certs ...
	I0528 15:27:21.985339   14720 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 15:27:21.985540   14720 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0528 15:27:21.986411   14720 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0528 15:27:21.986681   14720 certs.go:256] generating profile certs ...
	I0528 15:27:21.986817   14720 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-517300\client.key
	I0528 15:27:21.987488   14720 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-517300\apiserver.key.d5a8148f
	I0528 15:27:21.987681   14720 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-517300\apiserver.crt.d5a8148f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.18.212.160]
	I0528 15:27:22.246263   14720 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-517300\apiserver.crt.d5a8148f ...
	I0528 15:27:22.246263   14720 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-517300\apiserver.crt.d5a8148f: {Name:mkbcc6d34573e723fd93d9d4ee6bd3f7302a4991 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 15:27:22.247723   14720 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-517300\apiserver.key.d5a8148f ...
	I0528 15:27:22.247723   14720 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-517300\apiserver.key.d5a8148f: {Name:mk0c16934ffa8089018c3bc9057a82dd25367ae5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 15:27:22.248883   14720 certs.go:381] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-517300\apiserver.crt.d5a8148f -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-517300\apiserver.crt
	I0528 15:27:22.261568   14720 certs.go:385] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-517300\apiserver.key.d5a8148f -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-517300\apiserver.key
	I0528 15:27:22.277986   14720 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-517300\proxy-client.key
	I0528 15:27:22.277986   14720 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0528 15:27:22.277986   14720 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0528 15:27:22.278524   14720 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0528 15:27:22.278857   14720 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0528 15:27:22.279006   14720 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-517300\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0528 15:27:22.279006   14720 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-517300\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0528 15:27:22.279006   14720 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-517300\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0528 15:27:22.279536   14720 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-517300\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0528 15:27:22.280542   14720 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\1780.pem (1338 bytes)
	W0528 15:27:22.280772   14720 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\1780_empty.pem, impossibly tiny 0 bytes
	I0528 15:27:22.280772   14720 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0528 15:27:22.280772   14720 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0528 15:27:22.281451   14720 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0528 15:27:22.281723   14720 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0528 15:27:22.282330   14720 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\17802.pem (1708 bytes)
	I0528 15:27:22.282566   14720 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\1780.pem -> /usr/share/ca-certificates/1780.pem
	I0528 15:27:22.282690   14720 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\17802.pem -> /usr/share/ca-certificates/17802.pem
	I0528 15:27:22.282893   14720 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0528 15:27:22.284074   14720 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0528 15:27:22.332540   14720 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0528 15:27:22.373052   14720 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0528 15:27:22.420377   14720 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0528 15:27:22.469809   14720 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-517300\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0528 15:27:22.513606   14720 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-517300\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0528 15:27:22.558894   14720 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-517300\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0528 15:27:22.594293   14720 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-517300\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0528 15:27:22.643957   14720 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\1780.pem --> /usr/share/ca-certificates/1780.pem (1338 bytes)
	I0528 15:27:22.685075   14720 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\17802.pem --> /usr/share/ca-certificates/17802.pem (1708 bytes)
	I0528 15:27:22.726819   14720 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0528 15:27:22.781564   14720 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0528 15:27:22.827107   14720 ssh_runner.go:195] Run: openssl version
	I0528 15:27:22.834855   14720 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0528 15:27:22.841487   14720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1780.pem && ln -fs /usr/share/ca-certificates/1780.pem /etc/ssl/certs/1780.pem"
	I0528 15:27:22.876983   14720 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1780.pem
	I0528 15:27:22.879767   14720 command_runner.go:130] > -rw-r--r-- 1 root root 1338 May 28 20:41 /usr/share/ca-certificates/1780.pem
	I0528 15:27:22.883887   14720 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 28 20:41 /usr/share/ca-certificates/1780.pem
	I0528 15:27:22.895842   14720 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1780.pem
	I0528 15:27:22.903472   14720 command_runner.go:130] > 51391683
	I0528 15:27:22.916557   14720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1780.pem /etc/ssl/certs/51391683.0"
	I0528 15:27:22.948338   14720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17802.pem && ln -fs /usr/share/ca-certificates/17802.pem /etc/ssl/certs/17802.pem"
	I0528 15:27:22.980126   14720 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17802.pem
	I0528 15:27:22.993518   14720 command_runner.go:130] > -rw-r--r-- 1 root root 1708 May 28 20:41 /usr/share/ca-certificates/17802.pem
	I0528 15:27:22.993518   14720 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 28 20:41 /usr/share/ca-certificates/17802.pem
	I0528 15:27:23.005002   14720 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17802.pem
	I0528 15:27:23.015600   14720 command_runner.go:130] > 3ec20f2e
	I0528 15:27:23.026986   14720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/17802.pem /etc/ssl/certs/3ec20f2e.0"
	I0528 15:27:23.063789   14720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0528 15:27:23.094341   14720 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0528 15:27:23.101739   14720 command_runner.go:130] > -rw-r--r-- 1 root root 1111 May 28 20:25 /usr/share/ca-certificates/minikubeCA.pem
	I0528 15:27:23.101739   14720 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 28 20:25 /usr/share/ca-certificates/minikubeCA.pem
	I0528 15:27:23.113610   14720 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0528 15:27:23.121874   14720 command_runner.go:130] > b5213941
	I0528 15:27:23.134903   14720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
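The test -L / ln -fs pairs above follow the standard OpenSSL trust-store layout: each PEM under /usr/share/ca-certificates gets a /etc/ssl/certs/<subject-hash>.0 symlink so TLS libraries can locate it by hash. A hedged sketch of those two steps (installCACert is an illustrative helper with minimal error handling):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // installCACert links pemPath into /etc/ssl/certs under its OpenSSL
    // subject-hash name (e.g. b5213941.0), mirroring the commands in the log.
    func installCACert(pemPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return err
        }
        link := fmt.Sprintf("/etc/ssl/certs/%s.0", strings.TrimSpace(string(out)))
        // ln -fs is idempotent, so re-running against an existing link is harmless.
        return exec.Command("sudo", "ln", "-fs", pemPath, link).Run()
    }

    func main() {
        if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Println("install failed:", err)
        }
    }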
	I0528 15:27:23.164389   14720 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0528 15:27:23.170788   14720 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0528 15:27:23.175334   14720 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0528 15:27:23.175397   14720 command_runner.go:130] > Device: 8,1	Inode: 8384338     Links: 1
	I0528 15:27:23.175397   14720 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0528 15:27:23.175397   14720 command_runner.go:130] > Access: 2024-05-28 22:04:05.481412336 +0000
	I0528 15:27:23.175397   14720 command_runner.go:130] > Modify: 2024-05-28 22:04:05.481412336 +0000
	I0528 15:27:23.175397   14720 command_runner.go:130] > Change: 2024-05-28 22:04:05.481412336 +0000
	I0528 15:27:23.175397   14720 command_runner.go:130] >  Birth: 2024-05-28 22:04:05.481412336 +0000
	I0528 15:27:23.188388   14720 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0528 15:27:23.197867   14720 command_runner.go:130] > Certificate will not expire
	I0528 15:27:23.210811   14720 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0528 15:27:23.219959   14720 command_runner.go:130] > Certificate will not expire
	I0528 15:27:23.232725   14720 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0528 15:27:23.243412   14720 command_runner.go:130] > Certificate will not expire
	I0528 15:27:23.255820   14720 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0528 15:27:23.266243   14720 command_runner.go:130] > Certificate will not expire
	I0528 15:27:23.279379   14720 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0528 15:27:23.289775   14720 command_runner.go:130] > Certificate will not expire
	I0528 15:27:23.301434   14720 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0528 15:27:23.311872   14720 command_runner.go:130] > Certificate will not expire
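Each openssl x509 -checkend 86400 probe above asks whether the certificate will still be valid 24 hours from now; a zero exit status produces the "Certificate will not expire" lines. A small sketch of the same check (expiresWithin is an illustrative helper; for simplicity it treats any openssl failure as "expiring"):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // expiresWithin reports whether the certificate at path expires within the
    // given number of seconds; `openssl x509 -checkend N` exits non-zero when
    // it does (or when the file cannot be read, which this sketch lumps in).
    func expiresWithin(path string, seconds int) bool {
        cmd := exec.Command("openssl", "x509", "-noout", "-in", path,
            "-checkend", fmt.Sprint(seconds))
        return cmd.Run() != nil
    }

    func main() {
        for _, p := range []string{
            "/var/lib/minikube/certs/apiserver-kubelet-client.crt",
            "/var/lib/minikube/certs/etcd/server.crt",
        } {
            fmt.Printf("%s expires within 24h: %v\n", p, expiresWithin(p, 86400))
        }
    }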
	I0528 15:27:23.312006   14720 kubeadm.go:391] StartCluster: {Name:multinode-517300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.3
0.1 ClusterName:multinode-517300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.18.212.160 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.18.223.89 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.18.218.70 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-d
ns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0528 15:27:23.323096   14720 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0528 15:27:23.357235   14720 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0528 15:27:23.376407   14720 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0528 15:27:23.376407   14720 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0528 15:27:23.376407   14720 command_runner.go:130] > /var/lib/minikube/etcd:
	I0528 15:27:23.376407   14720 command_runner.go:130] > member
	W0528 15:27:23.376407   14720 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0528 15:27:23.376407   14720 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0528 15:27:23.376407   14720 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0528 15:27:23.388054   14720 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0528 15:27:23.406450   14720 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0528 15:27:23.407846   14720 kubeconfig.go:47] verify endpoint returned: get endpoint: "multinode-517300" does not appear in C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0528 15:27:23.408386   14720 kubeconfig.go:62] C:\Users\jenkins.minikube1\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "multinode-517300" cluster setting kubeconfig missing "multinode-517300" context setting]
	I0528 15:27:23.409097   14720 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 15:27:23.425000   14720 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0528 15:27:23.425807   14720 kapi.go:59] client config for multinode-517300: &rest.Config{Host:"https://172.18.212.160:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-517300/client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-517300/client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CADat
a:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1a6b5a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0528 15:27:23.427163   14720 cert_rotation.go:137] Starting client certificate rotation controller
	I0528 15:27:23.442562   14720 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0528 15:27:23.463047   14720 command_runner.go:130] > --- /var/tmp/minikube/kubeadm.yaml
	I0528 15:27:23.463129   14720 command_runner.go:130] > +++ /var/tmp/minikube/kubeadm.yaml.new
	I0528 15:27:23.463129   14720 command_runner.go:130] > @@ -1,7 +1,7 @@
	I0528 15:27:23.463129   14720 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta3
	I0528 15:27:23.463129   14720 command_runner.go:130] >  kind: InitConfiguration
	I0528 15:27:23.463129   14720 command_runner.go:130] >  localAPIEndpoint:
	I0528 15:27:23.463129   14720 command_runner.go:130] > -  advertiseAddress: 172.18.214.123
	I0528 15:27:23.463129   14720 command_runner.go:130] > +  advertiseAddress: 172.18.212.160
	I0528 15:27:23.463129   14720 command_runner.go:130] >    bindPort: 8443
	I0528 15:27:23.463129   14720 command_runner.go:130] >  bootstrapTokens:
	I0528 15:27:23.463129   14720 command_runner.go:130] >    - groups:
	I0528 15:27:23.463129   14720 command_runner.go:130] > @@ -14,13 +14,13 @@
	I0528 15:27:23.463129   14720 command_runner.go:130] >    criSocket: unix:///var/run/cri-dockerd.sock
	I0528 15:27:23.463129   14720 command_runner.go:130] >    name: "multinode-517300"
	I0528 15:27:23.463129   14720 command_runner.go:130] >    kubeletExtraArgs:
	I0528 15:27:23.463129   14720 command_runner.go:130] > -    node-ip: 172.18.214.123
	I0528 15:27:23.463129   14720 command_runner.go:130] > +    node-ip: 172.18.212.160
	I0528 15:27:23.463129   14720 command_runner.go:130] >    taints: []
	I0528 15:27:23.463129   14720 command_runner.go:130] >  ---
	I0528 15:27:23.463129   14720 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta3
	I0528 15:27:23.463129   14720 command_runner.go:130] >  kind: ClusterConfiguration
	I0528 15:27:23.463129   14720 command_runner.go:130] >  apiServer:
	I0528 15:27:23.463129   14720 command_runner.go:130] > -  certSANs: ["127.0.0.1", "localhost", "172.18.214.123"]
	I0528 15:27:23.463129   14720 command_runner.go:130] > +  certSANs: ["127.0.0.1", "localhost", "172.18.212.160"]
	I0528 15:27:23.463129   14720 command_runner.go:130] >    extraArgs:
	I0528 15:27:23.463129   14720 command_runner.go:130] >      enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	I0528 15:27:23.463129   14720 command_runner.go:130] >  controllerManager:
	I0528 15:27:23.463129   14720 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -1,7 +1,7 @@
	 apiVersion: kubeadm.k8s.io/v1beta3
	 kind: InitConfiguration
	 localAPIEndpoint:
	-  advertiseAddress: 172.18.214.123
	+  advertiseAddress: 172.18.212.160
	   bindPort: 8443
	 bootstrapTokens:
	   - groups:
	@@ -14,13 +14,13 @@
	   criSocket: unix:///var/run/cri-dockerd.sock
	   name: "multinode-517300"
	   kubeletExtraArgs:
	-    node-ip: 172.18.214.123
	+    node-ip: 172.18.212.160
	   taints: []
	 ---
	 apiVersion: kubeadm.k8s.io/v1beta3
	 kind: ClusterConfiguration
	 apiServer:
	-  certSANs: ["127.0.0.1", "localhost", "172.18.214.123"]
	+  certSANs: ["127.0.0.1", "localhost", "172.18.212.160"]
	   extraArgs:
	     enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	 controllerManager:
	
	-- /stdout --
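Drift detection here is just diff -u over the old and new kubeadm.yaml: exit status 0 means identical, 1 means the files differ (in this run only the node IP moved from 172.18.214.123 to 172.18.212.160), and anything else is a genuine error. A sketch of reading that three-way result in Go (kubeadmConfigDrifted is an illustrative name):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // kubeadmConfigDrifted runs `diff -u old new` and interprets the exit
    // status: 0 = identical, 1 = drifted (out holds the unified diff),
    // anything else = real error.
    func kubeadmConfigDrifted(oldPath, newPath string) (bool, string, error) {
        out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
        if err == nil {
            return false, "", nil
        }
        if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
            return true, string(out), nil
        }
        return false, "", err
    }

    func main() {
        drifted, diff, err := kubeadmConfigDrifted(
            "/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
        fmt.Println(drifted, err)
        if drifted {
            fmt.Println(diff) // the cluster gets reconfigured from the .new file
        }
    }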
	I0528 15:27:23.463682   14720 kubeadm.go:1154] stopping kube-system containers ...
	I0528 15:27:23.472836   14720 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0528 15:27:23.504241   14720 command_runner.go:130] > 46e6bf352b79
	I0528 15:27:23.504241   14720 command_runner.go:130] > 74720800aab4
	I0528 15:27:23.504241   14720 command_runner.go:130] > 879c412c7f45
	I0528 15:27:23.504241   14720 command_runner.go:130] > 666496aef7c5
	I0528 15:27:23.504241   14720 command_runner.go:130] > 3799d49aa0d8
	I0528 15:27:23.504241   14720 command_runner.go:130] > ec6e090d8a58
	I0528 15:27:23.504241   14720 command_runner.go:130] > d1b2ab67f8e3
	I0528 15:27:23.504241   14720 command_runner.go:130] > 3f800c86d1d4
	I0528 15:27:23.504241   14720 command_runner.go:130] > 1b70e592d21f
	I0528 15:27:23.504241   14720 command_runner.go:130] > b44c8d48ef93
	I0528 15:27:23.504241   14720 command_runner.go:130] > b21e4a41b2fe
	I0528 15:27:23.504241   14720 command_runner.go:130] > d79040b216c8
	I0528 15:27:23.504241   14720 command_runner.go:130] > 3b2820676d5c
	I0528 15:27:23.504241   14720 command_runner.go:130] > 453e3ca7e2c8
	I0528 15:27:23.504241   14720 command_runner.go:130] > 74bfa22b48f3
	I0528 15:27:23.504241   14720 command_runner.go:130] > de5c7dce3683
	I0528 15:27:23.504241   14720 docker.go:483] Stopping containers: [46e6bf352b79 74720800aab4 879c412c7f45 666496aef7c5 3799d49aa0d8 ec6e090d8a58 d1b2ab67f8e3 3f800c86d1d4 1b70e592d21f b44c8d48ef93 b21e4a41b2fe d79040b216c8 3b2820676d5c 453e3ca7e2c8 74bfa22b48f3 de5c7dce3683]
	I0528 15:27:23.514645   14720 ssh_runner.go:195] Run: docker stop 46e6bf352b79 74720800aab4 879c412c7f45 666496aef7c5 3799d49aa0d8 ec6e090d8a58 d1b2ab67f8e3 3f800c86d1d4 1b70e592d21f b44c8d48ef93 b21e4a41b2fe d79040b216c8 3b2820676d5c 453e3ca7e2c8 74bfa22b48f3 de5c7dce3683
	I0528 15:27:23.551125   14720 command_runner.go:130] > 46e6bf352b79
	I0528 15:27:23.551125   14720 command_runner.go:130] > 74720800aab4
	I0528 15:27:23.551125   14720 command_runner.go:130] > 879c412c7f45
	I0528 15:27:23.551125   14720 command_runner.go:130] > 666496aef7c5
	I0528 15:27:23.551125   14720 command_runner.go:130] > 3799d49aa0d8
	I0528 15:27:23.551125   14720 command_runner.go:130] > ec6e090d8a58
	I0528 15:27:23.551125   14720 command_runner.go:130] > d1b2ab67f8e3
	I0528 15:27:23.551125   14720 command_runner.go:130] > 3f800c86d1d4
	I0528 15:27:23.551125   14720 command_runner.go:130] > 1b70e592d21f
	I0528 15:27:23.551125   14720 command_runner.go:130] > b44c8d48ef93
	I0528 15:27:23.551125   14720 command_runner.go:130] > b21e4a41b2fe
	I0528 15:27:23.551125   14720 command_runner.go:130] > d79040b216c8
	I0528 15:27:23.551125   14720 command_runner.go:130] > 3b2820676d5c
	I0528 15:27:23.551125   14720 command_runner.go:130] > 453e3ca7e2c8
	I0528 15:27:23.551125   14720 command_runner.go:130] > 74bfa22b48f3
	I0528 15:27:23.551125   14720 command_runner.go:130] > de5c7dce3683
	I0528 15:27:23.565456   14720 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0528 15:27:23.612308   14720 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0528 15:27:23.623269   14720 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0528 15:27:23.623269   14720 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0528 15:27:23.623269   14720 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0528 15:27:23.623269   14720 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0528 15:27:23.623269   14720 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0528 15:27:23.623269   14720 kubeadm.go:156] found existing configuration files:
	
	I0528 15:27:23.645603   14720 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0528 15:27:23.653486   14720 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0528 15:27:23.653486   14720 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0528 15:27:23.677184   14720 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0528 15:27:23.708181   14720 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0528 15:27:23.725520   14720 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0528 15:27:23.725665   14720 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0528 15:27:23.737893   14720 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0528 15:27:23.770031   14720 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0528 15:27:23.788741   14720 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0528 15:27:23.789765   14720 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0528 15:27:23.802390   14720 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0528 15:27:23.834836   14720 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0528 15:27:23.851552   14720 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0528 15:27:23.852076   14720 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0528 15:27:23.866624   14720 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0528 15:27:23.897304   14720 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0528 15:27:23.917676   14720 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0528 15:27:24.218032   14720 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0528 15:27:24.218120   14720 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0528 15:27:24.218120   14720 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0528 15:27:24.218120   14720 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0528 15:27:24.218120   14720 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0528 15:27:24.218198   14720 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0528 15:27:24.218198   14720 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0528 15:27:24.218259   14720 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0528 15:27:24.218259   14720 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0528 15:27:24.218335   14720 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0528 15:27:24.218405   14720 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0528 15:27:24.218405   14720 command_runner.go:130] > [certs] Using the existing "sa" key
	I0528 15:27:24.218496   14720 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0528 15:27:25.446773   14720 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0528 15:27:25.446773   14720 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0528 15:27:25.446773   14720 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0528 15:27:25.446773   14720 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0528 15:27:25.450722   14720 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0528 15:27:25.450722   14720 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0528 15:27:25.450778   14720 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.2321667s)
	I0528 15:27:25.450817   14720 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0528 15:27:25.745532   14720 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0528 15:27:25.745670   14720 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0528 15:27:25.745670   14720 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0528 15:27:25.746031   14720 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0528 15:27:25.834542   14720 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0528 15:27:25.834542   14720 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0528 15:27:25.834542   14720 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0528 15:27:25.834542   14720 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0528 15:27:25.834542   14720 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0528 15:27:25.905559   14720 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0528 15:27:25.926493   14720 api_server.go:52] waiting for apiserver process to appear ...
	I0528 15:27:25.941311   14720 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 15:27:26.459149   14720 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 15:27:26.941768   14720 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 15:27:27.451066   14720 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 15:27:27.950978   14720 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 15:27:27.974483   14720 command_runner.go:130] > 1882
	I0528 15:27:27.974720   14720 api_server.go:72] duration metric: took 2.0483137s to wait for apiserver process to appear ...
	I0528 15:27:27.974762   14720 api_server.go:88] waiting for apiserver healthz status ...
	I0528 15:27:27.974865   14720 api_server.go:253] Checking apiserver healthz at https://172.18.212.160:8443/healthz ...
	I0528 15:27:31.020687   14720 api_server.go:279] https://172.18.212.160:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0528 15:27:31.024687   14720 api_server.go:103] status: https://172.18.212.160:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0528 15:27:31.024687   14720 api_server.go:253] Checking apiserver healthz at https://172.18.212.160:8443/healthz ...
	I0528 15:27:31.077998   14720 api_server.go:279] https://172.18.212.160:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0528 15:27:31.080483   14720 api_server.go:103] status: https://172.18.212.160:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0528 15:27:31.476549   14720 api_server.go:253] Checking apiserver healthz at https://172.18.212.160:8443/healthz ...
	I0528 15:27:31.484913   14720 api_server.go:279] https://172.18.212.160:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0528 15:27:31.484913   14720 api_server.go:103] status: https://172.18.212.160:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0528 15:27:31.978263   14720 api_server.go:253] Checking apiserver healthz at https://172.18.212.160:8443/healthz ...
	I0528 15:27:31.986961   14720 api_server.go:279] https://172.18.212.160:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0528 15:27:31.986961   14720 api_server.go:103] status: https://172.18.212.160:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0528 15:27:32.491361   14720 api_server.go:253] Checking apiserver healthz at https://172.18.212.160:8443/healthz ...
	I0528 15:27:32.503272   14720 api_server.go:279] https://172.18.212.160:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0528 15:27:32.503722   14720 api_server.go:103] status: https://172.18.212.160:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0528 15:27:32.984021   14720 api_server.go:253] Checking apiserver healthz at https://172.18.212.160:8443/healthz ...
	I0528 15:27:32.990646   14720 api_server.go:279] https://172.18.212.160:8443/healthz returned 200:
	ok
	I0528 15:27:32.993020   14720 round_trippers.go:463] GET https://172.18.212.160:8443/version
	I0528 15:27:32.993113   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:32.993113   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:32.993113   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:33.011505   14720 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0528 15:27:33.011505   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:33.011505   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:33.011505   14720 round_trippers.go:580]     Content-Length: 263
	I0528 15:27:33.011505   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:33 GMT
	I0528 15:27:33.011505   14720 round_trippers.go:580]     Audit-Id: a1ba0a68-6bc5-4f4b-96e3-5dc461a2c929
	I0528 15:27:33.011505   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:33.011505   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:33.011505   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:33.011505   14720 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.1",
	  "gitCommit": "6911225c3f747e1cd9d109c305436d08b668f086",
	  "gitTreeState": "clean",
	  "buildDate": "2024-05-14T10:42:02Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0528 15:27:33.012975   14720 api_server.go:141] control plane version: v1.30.1
	I0528 15:27:33.012975   14720 api_server.go:131] duration metric: took 5.0381973s to wait for apiserver health ...
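(Illustrative context, not part of the log above.) The preceding lines record minikube polling the apiserver's /healthz endpoint until it stopped returning 500 and answered 200. A minimal Go sketch of that kind of polling loop, assuming a self-signed apiserver certificate and reusing the address from the log purely as an example; this is not minikube's api_server.go:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// The apiserver in this setup serves a self-signed certificate,
			// so verification is skipped for the sketch.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	url := "https://172.18.212.160:8443/healthz" // address taken from the log above
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver health")
}
```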
	I0528 15:27:33.012975   14720 cni.go:84] Creating CNI manager for ""
	I0528 15:27:33.012975   14720 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0528 15:27:33.015254   14720 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0528 15:27:33.031290   14720 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0528 15:27:33.035618   14720 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0528 15:27:33.035618   14720 command_runner.go:130] >   Size: 2781656   	Blocks: 5440       IO Block: 4096   regular file
	I0528 15:27:33.035618   14720 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0528 15:27:33.035618   14720 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0528 15:27:33.035618   14720 command_runner.go:130] > Access: 2024-05-28 22:25:59.797580600 +0000
	I0528 15:27:33.040113   14720 command_runner.go:130] > Modify: 2024-05-22 23:10:00.000000000 +0000
	I0528 15:27:33.040113   14720 command_runner.go:130] > Change: 2024-05-28 22:25:48.111000000 +0000
	I0528 15:27:33.040208   14720 command_runner.go:130] >  Birth: -
	I0528 15:27:33.040208   14720 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.1/kubectl ...
	I0528 15:27:33.040208   14720 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0528 15:27:33.103901   14720 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0528 15:27:34.073012   14720 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0528 15:27:34.073012   14720 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0528 15:27:34.075971   14720 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0528 15:27:34.075971   14720 command_runner.go:130] > daemonset.apps/kindnet configured
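(Illustrative context, not part of the log above.) The lines above record the kindnet CNI manifest being applied with kubectl against the in-VM kubeconfig. A minimal sketch of driving that same kubectl invocation from Go; the binary and file paths are placeholders taken from the log, and this is not minikube's ssh_runner:

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command(
		"kubectl",
		"apply",
		"--kubeconfig", "/var/lib/minikube/kubeconfig",
		"-f", "/var/tmp/minikube/cni.yaml",
	)
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out)) // e.g. "daemonset.apps/kindnet configured"
	if err != nil {
		fmt.Println("apply failed:", err)
	}
}
```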
	I0528 15:27:34.076073   14720 system_pods.go:43] waiting for kube-system pods to appear ...
	I0528 15:27:34.076073   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/namespaces/kube-system/pods
	I0528 15:27:34.076073   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:34.076073   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:34.076073   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:34.079570   14720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 15:27:34.079570   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:34.082329   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:34 GMT
	I0528 15:27:34.082473   14720 round_trippers.go:580]     Audit-Id: 7e43b862-5a15-44d4-ac27-c8fe91d353f0
	I0528 15:27:34.082538   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:34.082538   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:34.082538   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:34.082538   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:34.084523   14720 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1803"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-p9vb9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d1d5d377-99a6-445e-9597-446b3057e80f","resourceVersion":"1799","creationTimestamp":"2024-05-28T22:04:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d11a1b0d-8687-41d7-b841-24979449141a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:04:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d11a1b0d-8687-41d7-b841-24979449141a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 87768 chars]
	I0528 15:27:34.091115   14720 system_pods.go:59] 12 kube-system pods found
	I0528 15:27:34.091115   14720 system_pods.go:61] "coredns-7db6d8ff4d-p9vb9" [d1d5d377-99a6-445e-9597-446b3057e80f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0528 15:27:34.091115   14720 system_pods.go:61] "etcd-multinode-517300" [8e99dbbd-d8ae-4164-8087-9d0d43ebec00] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0528 15:27:34.091115   14720 system_pods.go:61] "kindnet-jf49h" [07acbd95-ca5b-4a2f-aab4-430f47fcf186] Running
	I0528 15:27:34.091115   14720 system_pods.go:61] "kindnet-rmwrj" [df5913a4-73bb-4d14-8b59-a59d2b449bbe] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0528 15:27:34.091115   14720 system_pods.go:61] "kindnet-smtdx" [e50433af-d238-4e3c-af31-4b248e50eb9d] Running
	I0528 15:27:34.091115   14720 system_pods.go:61] "kube-apiserver-multinode-517300" [ea194cef-cdf6-4ed7-b555-b57cc085b75b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0528 15:27:34.091115   14720 system_pods.go:61] "kube-controller-manager-multinode-517300" [8de5dc69-a626-438c-8f8e-1542254825fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0528 15:27:34.091115   14720 system_pods.go:61] "kube-proxy-ckbkv" [13ccca53-976a-4839-be94-d3960388ebfa] Running
	I0528 15:27:34.091115   14720 system_pods.go:61] "kube-proxy-hbbq2" [3d8065a2-e1bb-4cf6-837a-f63ed0718835] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0528 15:27:34.091821   14720 system_pods.go:61] "kube-proxy-jfckx" [879a98da-431d-4c4d-878a-1c5d63948d55] Running
	I0528 15:27:34.091821   14720 system_pods.go:61] "kube-scheduler-multinode-517300" [2dc1e4d9-ef1b-4f83-8762-569efcb3f4c6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0528 15:27:34.091863   14720 system_pods.go:61] "storage-provisioner" [ab978e2d-322a-4207-9539-d2d92a25e566] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0528 15:27:34.091863   14720 system_pods.go:74] duration metric: took 15.7899ms to wait for pod list to return data ...
	I0528 15:27:34.091911   14720 node_conditions.go:102] verifying NodePressure condition ...
	I0528 15:27:34.091945   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes
	I0528 15:27:34.091945   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:34.091945   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:34.091945   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:34.099817   14720 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0528 15:27:34.099817   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:34.099817   14720 round_trippers.go:580]     Audit-Id: 1b9bbc58-d068-4e00-82da-468773958280
	I0528 15:27:34.099817   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:34.099817   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:34.099817   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:34.099817   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:34.099817   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:34 GMT
	I0528 15:27:34.100497   14720 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1803"},"items":[{"metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1783","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 15629 chars]
	I0528 15:27:34.102015   14720 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0528 15:27:34.102015   14720 node_conditions.go:123] node cpu capacity is 2
	I0528 15:27:34.102015   14720 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0528 15:27:34.102015   14720 node_conditions.go:123] node cpu capacity is 2
	I0528 15:27:34.102015   14720 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0528 15:27:34.102015   14720 node_conditions.go:123] node cpu capacity is 2
	I0528 15:27:34.102015   14720 node_conditions.go:105] duration metric: took 10.0696ms to run NodePressure ...
	I0528 15:27:34.102015   14720 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0528 15:27:34.466299   14720 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0528 15:27:34.466299   14720 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0528 15:27:34.466299   14720 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0528 15:27:34.466299   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%!D(MISSING)control-plane
	I0528 15:27:34.466299   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:34.466299   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:34.466299   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:34.469138   14720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 15:27:34.469138   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:34.469138   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:34 GMT
	I0528 15:27:34.471744   14720 round_trippers.go:580]     Audit-Id: 425dd875-9bad-4303-a729-d306569c59c8
	I0528 15:27:34.471744   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:34.471744   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:34.471744   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:34.471744   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:34.472806   14720 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1808"},"items":[{"metadata":{"name":"etcd-multinode-517300","namespace":"kube-system","uid":"8e99dbbd-d8ae-4164-8087-9d0d43ebec00","resourceVersion":"1794","creationTimestamp":"2024-05-28T22:27:31Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.18.212.160:2379","kubernetes.io/config.hash":"29a3c98730324ed6fda87b8b55b4c17b","kubernetes.io/config.mirror":"29a3c98730324ed6fda87b8b55b4c17b","kubernetes.io/config.seen":"2024-05-28T22:27:25.948291596Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:27:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotati
ons":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f [truncated 30563 chars]
	I0528 15:27:34.474547   14720 kubeadm.go:733] kubelet initialised
	I0528 15:27:34.474547   14720 kubeadm.go:734] duration metric: took 8.248ms waiting for restarted kubelet to initialise ...
	I0528 15:27:34.474547   14720 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0528 15:27:34.474547   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/namespaces/kube-system/pods
	I0528 15:27:34.474547   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:34.474547   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:34.474547   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:34.477838   14720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 15:27:34.477838   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:34.480585   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:34 GMT
	I0528 15:27:34.480585   14720 round_trippers.go:580]     Audit-Id: 9dfe8821-f413-440e-8806-85297a862539
	I0528 15:27:34.480643   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:34.480643   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:34.480643   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:34.480643   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:34.483547   14720 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1808"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-p9vb9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d1d5d377-99a6-445e-9597-446b3057e80f","resourceVersion":"1799","creationTimestamp":"2024-05-28T22:04:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d11a1b0d-8687-41d7-b841-24979449141a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:04:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d11a1b0d-8687-41d7-b841-24979449141a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 87175 chars]
	I0528 15:27:34.491872   14720 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-p9vb9" in "kube-system" namespace to be "Ready" ...
	I0528 15:27:34.491872   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-p9vb9
	I0528 15:27:34.491872   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:34.491872   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:34.491872   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:34.492603   14720 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:27:34.494508   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:34.494508   14720 round_trippers.go:580]     Audit-Id: 0e00f395-1f09-4893-935d-0a77e2cc4b01
	I0528 15:27:34.494508   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:34.494508   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:34.494508   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:34.494560   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:34.494560   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:34 GMT
	I0528 15:27:34.494588   14720 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-p9vb9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d1d5d377-99a6-445e-9597-446b3057e80f","resourceVersion":"1799","creationTimestamp":"2024-05-28T22:04:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d11a1b0d-8687-41d7-b841-24979449141a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:04:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d11a1b0d-8687-41d7-b841-24979449141a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0528 15:27:34.495166   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:27:34.495166   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:34.495214   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:34.495214   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:34.498889   14720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 15:27:34.498889   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:34.498889   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:34.498889   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:34 GMT
	I0528 15:27:34.498889   14720 round_trippers.go:580]     Audit-Id: d6e40a21-05c9-4f94-a4db-3ea77a13b5d2
	I0528 15:27:34.498889   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:34.498889   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:34.498889   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:34.499426   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1783","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0528 15:27:34.499996   14720 pod_ready.go:97] node "multinode-517300" hosting pod "coredns-7db6d8ff4d-p9vb9" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-517300" has status "Ready":"False"
	I0528 15:27:34.500078   14720 pod_ready.go:81] duration metric: took 8.2057ms for pod "coredns-7db6d8ff4d-p9vb9" in "kube-system" namespace to be "Ready" ...
	E0528 15:27:34.500114   14720 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-517300" hosting pod "coredns-7db6d8ff4d-p9vb9" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-517300" has status "Ready":"False"
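(Illustrative context, not part of the log above.) The pattern in these pod_ready lines is: fetch the pod, fetch the node named in the pod's spec, and skip the wait while that node's Ready condition is not True. A minimal client-go sketch of that check, with a hypothetical kubeconfig path and the pod name reused from the log only as an example; this is not minikube's pod_ready.go:

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the node's Ready condition is True.
func nodeReady(node *corev1.Node) bool {
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()
	pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "coredns-7db6d8ff4d-p9vb9", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	node, err := cs.CoreV1().Nodes().Get(ctx, pod.Spec.NodeName, metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	if !nodeReady(node) {
		fmt.Printf("node %q not Ready, skipping pod %q\n", node.Name, pod.Name)
		return
	}
	fmt.Printf("node %q Ready, pod %q can be waited on\n", node.Name, pod.Name)
}
```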
	I0528 15:27:34.500146   14720 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-517300" in "kube-system" namespace to be "Ready" ...
	I0528 15:27:34.500284   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-517300
	I0528 15:27:34.500320   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:34.500353   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:34.500391   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:34.503216   14720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 15:27:34.503216   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:34.503216   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:34.503216   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:34 GMT
	I0528 15:27:34.503216   14720 round_trippers.go:580]     Audit-Id: f07b0c46-56b1-45e9-b2e6-e5e706aa183a
	I0528 15:27:34.503216   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:34.503216   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:34.503216   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:34.503811   14720 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-517300","namespace":"kube-system","uid":"8e99dbbd-d8ae-4164-8087-9d0d43ebec00","resourceVersion":"1794","creationTimestamp":"2024-05-28T22:27:31Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.18.212.160:2379","kubernetes.io/config.hash":"29a3c98730324ed6fda87b8b55b4c17b","kubernetes.io/config.mirror":"29a3c98730324ed6fda87b8b55b4c17b","kubernetes.io/config.seen":"2024-05-28T22:27:25.948291596Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:27:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6395 chars]
	I0528 15:27:34.503964   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:27:34.503964   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:34.503964   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:34.503964   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:34.505955   14720 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0528 15:27:34.507537   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:34.507615   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:34 GMT
	I0528 15:27:34.507638   14720 round_trippers.go:580]     Audit-Id: 4f86d285-800d-42f2-90c0-56eec9ef49d0
	I0528 15:27:34.507671   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:34.507671   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:34.507671   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:34.507671   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:34.507671   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1783","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0528 15:27:34.508247   14720 pod_ready.go:97] node "multinode-517300" hosting pod "etcd-multinode-517300" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-517300" has status "Ready":"False"
	I0528 15:27:34.508364   14720 pod_ready.go:81] duration metric: took 8.181ms for pod "etcd-multinode-517300" in "kube-system" namespace to be "Ready" ...
	E0528 15:27:34.508419   14720 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-517300" hosting pod "etcd-multinode-517300" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-517300" has status "Ready":"False"
	I0528 15:27:34.508419   14720 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-517300" in "kube-system" namespace to be "Ready" ...
	I0528 15:27:34.508588   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-517300
	I0528 15:27:34.508638   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:34.508638   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:34.508684   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:34.516923   14720 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0528 15:27:34.516923   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:34.517269   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:34.517269   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:34 GMT
	I0528 15:27:34.517269   14720 round_trippers.go:580]     Audit-Id: 8dac5e8b-a81d-43a6-af3c-1685cdc1c78a
	I0528 15:27:34.517269   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:34.517269   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:34.517269   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:34.517481   14720 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-517300","namespace":"kube-system","uid":"ea194cef-cdf6-4ed7-b555-b57cc085b75b","resourceVersion":"1793","creationTimestamp":"2024-05-28T22:27:31Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.18.212.160:8443","kubernetes.io/config.hash":"1e42a0eced136b8233df0d839e9a8652","kubernetes.io/config.mirror":"1e42a0eced136b8233df0d839e9a8652","kubernetes.io/config.seen":"2024-05-28T22:27:25.898753713Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:27:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 7949 chars]
	I0528 15:27:34.517909   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:27:34.517909   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:34.517909   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:34.517909   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:34.518514   14720 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:27:34.518514   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:34.520747   14720 round_trippers.go:580]     Audit-Id: 30bcf090-b451-455c-b6a3-0db724191a15
	I0528 15:27:34.520747   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:34.520747   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:34.520747   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:34.520829   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:34.520829   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:34 GMT
	I0528 15:27:34.520895   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1783","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0528 15:27:34.520895   14720 pod_ready.go:97] node "multinode-517300" hosting pod "kube-apiserver-multinode-517300" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-517300" has status "Ready":"False"
	I0528 15:27:34.521473   14720 pod_ready.go:81] duration metric: took 12.407ms for pod "kube-apiserver-multinode-517300" in "kube-system" namespace to be "Ready" ...
	E0528 15:27:34.521473   14720 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-517300" hosting pod "kube-apiserver-multinode-517300" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-517300" has status "Ready":"False"
	I0528 15:27:34.521473   14720 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-517300" in "kube-system" namespace to be "Ready" ...
	I0528 15:27:34.521626   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-517300
	I0528 15:27:34.521626   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:34.521626   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:34.521626   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:34.525123   14720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 15:27:34.525209   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:34.525209   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:34.525256   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:34 GMT
	I0528 15:27:34.525256   14720 round_trippers.go:580]     Audit-Id: 052548c1-7d80-4f34-ae11-5f349868ae53
	I0528 15:27:34.525256   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:34.525284   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:34.525284   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:34.525582   14720 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-517300","namespace":"kube-system","uid":"8de5dc69-a626-438c-8f8e-1542254825fb","resourceVersion":"1791","creationTimestamp":"2024-05-28T22:04:18Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"26672dc0ef1c19cfbeb08736c0b97223","kubernetes.io/config.mirror":"26672dc0ef1c19cfbeb08736c0b97223","kubernetes.io/config.seen":"2024-05-28T22:04:18.078856789Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:04:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7737 chars]
	I0528 15:27:34.526181   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:27:34.526181   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:34.526181   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:34.526181   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:34.530889   14720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 15:27:34.530978   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:34.530978   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:34.531010   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:34 GMT
	I0528 15:27:34.531010   14720 round_trippers.go:580]     Audit-Id: 7d3a93ad-7d79-4d81-9927-e05894911667
	I0528 15:27:34.531010   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:34.531010   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:34.531010   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:34.531433   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1783","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0528 15:27:34.531857   14720 pod_ready.go:97] node "multinode-517300" hosting pod "kube-controller-manager-multinode-517300" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-517300" has status "Ready":"False"
	I0528 15:27:34.531857   14720 pod_ready.go:81] duration metric: took 10.3836ms for pod "kube-controller-manager-multinode-517300" in "kube-system" namespace to be "Ready" ...
	E0528 15:27:34.531857   14720 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-517300" hosting pod "kube-controller-manager-multinode-517300" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-517300" has status "Ready":"False"
	I0528 15:27:34.531857   14720 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-ckbkv" in "kube-system" namespace to be "Ready" ...
	I0528 15:27:34.682530   14720 request.go:629] Waited for 150.3591ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.212.160:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ckbkv
	I0528 15:27:34.682717   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ckbkv
	I0528 15:27:34.682717   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:34.682717   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:34.682717   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:34.683424   14720 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:27:34.688073   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:34.688073   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:34.688073   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:34.688073   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:34.688157   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:34.688157   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:34 GMT
	I0528 15:27:34.688157   14720 round_trippers.go:580]     Audit-Id: e05260b3-db48-49db-9c10-416b56c14ef1
	I0528 15:27:34.688501   14720 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-ckbkv","generateName":"kube-proxy-","namespace":"kube-system","uid":"13ccca53-976a-4839-be94-d3960388ebfa","resourceVersion":"637","creationTimestamp":"2024-05-28T22:07:27Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"1b969d95-d2e7-4f1f-9dfe-2e267564d504","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:07:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1b969d95-d2e7-4f1f-9dfe-2e267564d504\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5836 chars]
	I0528 15:27:34.881799   14720 request.go:629] Waited for 192.2187ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.212.160:8443/api/v1/nodes/multinode-517300-m02
	I0528 15:27:34.881971   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300-m02
	I0528 15:27:34.881971   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:34.881971   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:34.881971   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:34.882517   14720 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:27:34.882517   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:34.882517   14720 round_trippers.go:580]     Audit-Id: a857866b-1d02-4135-940e-0705630715ae
	I0528 15:27:34.882517   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:34.882517   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:34.882517   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:34.882517   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:34.882517   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:34 GMT
	I0528 15:27:34.886086   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300-m02","uid":"4ff35487-ffa9-4e96-b7b9-c6f33c06ac46","resourceVersion":"1701","creationTimestamp":"2024-05-28T22:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_28T15_07_27_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:07:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3826 chars]
	I0528 15:27:34.886499   14720 pod_ready.go:92] pod "kube-proxy-ckbkv" in "kube-system" namespace has status "Ready":"True"
	I0528 15:27:34.886499   14720 pod_ready.go:81] duration metric: took 354.6408ms for pod "kube-proxy-ckbkv" in "kube-system" namespace to be "Ready" ...
	I0528 15:27:34.886499   14720 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-hbbq2" in "kube-system" namespace to be "Ready" ...
	I0528 15:27:35.091130   14720 request.go:629] Waited for 204.4087ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.212.160:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hbbq2
	I0528 15:27:35.091300   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hbbq2
	I0528 15:27:35.091300   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:35.091401   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:35.091401   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:35.092060   14720 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:27:35.092060   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:35.092060   14720 round_trippers.go:580]     Audit-Id: 43e63484-78fd-4c48-adcc-f219ff5ea848
	I0528 15:27:35.092060   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:35.092060   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:35.092060   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:35.092060   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:35.092060   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:35 GMT
	I0528 15:27:35.096018   14720 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-hbbq2","generateName":"kube-proxy-","namespace":"kube-system","uid":"3d8065a2-e1bb-4cf6-837a-f63ed0718835","resourceVersion":"1805","creationTimestamp":"2024-05-28T22:04:31Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"1b969d95-d2e7-4f1f-9dfe-2e267564d504","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:04:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1b969d95-d2e7-4f1f-9dfe-2e267564d504\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6039 chars]
	I0528 15:27:35.277694   14720 request.go:629] Waited for 180.6744ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:27:35.277790   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:27:35.277790   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:35.277790   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:35.277865   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:35.278464   14720 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:27:35.282303   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:35.282303   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:35.282384   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:35 GMT
	I0528 15:27:35.282384   14720 round_trippers.go:580]     Audit-Id: 70336e55-252b-4706-bf32-560d4480ee8e
	I0528 15:27:35.282384   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:35.282501   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:35.282559   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:35.282559   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1783","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0528 15:27:35.283327   14720 pod_ready.go:97] node "multinode-517300" hosting pod "kube-proxy-hbbq2" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-517300" has status "Ready":"False"
	I0528 15:27:35.283327   14720 pod_ready.go:81] duration metric: took 396.8268ms for pod "kube-proxy-hbbq2" in "kube-system" namespace to be "Ready" ...
	E0528 15:27:35.283327   14720 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-517300" hosting pod "kube-proxy-hbbq2" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-517300" has status "Ready":"False"
	I0528 15:27:35.283327   14720 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-jfckx" in "kube-system" namespace to be "Ready" ...
	I0528 15:27:35.487350   14720 request.go:629] Waited for 203.8273ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.212.160:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jfckx
	I0528 15:27:35.487450   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jfckx
	I0528 15:27:35.487450   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:35.487450   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:35.487602   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:35.487879   14720 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:27:35.491109   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:35.491192   14720 round_trippers.go:580]     Audit-Id: 217fe55e-412c-4811-82dc-f08dd4114e2a
	I0528 15:27:35.491192   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:35.491192   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:35.491192   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:35.491192   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:35.491192   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:35 GMT
	I0528 15:27:35.491192   14720 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-jfckx","generateName":"kube-proxy-","namespace":"kube-system","uid":"879a98da-431d-4c4d-878a-1c5d63948d55","resourceVersion":"1760","creationTimestamp":"2024-05-28T22:12:11Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"1b969d95-d2e7-4f1f-9dfe-2e267564d504","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:12:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1b969d95-d2e7-4f1f-9dfe-2e267564d504\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6062 chars]
	I0528 15:27:35.680952   14720 request.go:629] Waited for 188.7381ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.212.160:8443/api/v1/nodes/multinode-517300-m03
	I0528 15:27:35.680952   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300-m03
	I0528 15:27:35.681147   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:35.681147   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:35.681147   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:35.681782   14720 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:27:35.686835   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:35.686835   14720 round_trippers.go:580]     Audit-Id: a015d3b0-97ab-485c-954a-ed77b00c30ab
	I0528 15:27:35.686835   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:35.686835   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:35.686835   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:35.686835   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:35.686835   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:35 GMT
	I0528 15:27:35.687150   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300-m03","uid":"d1ead7ca-233f-4d89-9e59-97f5e8e47155","resourceVersion":"1772","creationTimestamp":"2024-05-28T22:23:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_28T15_23_04_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:23:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4399 chars]
	I0528 15:27:35.687661   14720 pod_ready.go:97] node "multinode-517300-m03" hosting pod "kube-proxy-jfckx" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-517300-m03" has status "Ready":"Unknown"
	I0528 15:27:35.687719   14720 pod_ready.go:81] duration metric: took 404.3907ms for pod "kube-proxy-jfckx" in "kube-system" namespace to be "Ready" ...
	E0528 15:27:35.687777   14720 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-517300-m03" hosting pod "kube-proxy-jfckx" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-517300-m03" has status "Ready":"Unknown"
	I0528 15:27:35.687777   14720 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-517300" in "kube-system" namespace to be "Ready" ...
	I0528 15:27:35.891425   14720 request.go:629] Waited for 203.1146ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.212.160:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-517300
	I0528 15:27:35.891425   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-517300
	I0528 15:27:35.891425   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:35.891425   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:35.891425   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:35.895423   14720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 15:27:35.895423   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:35.896844   14720 round_trippers.go:580]     Audit-Id: 32cce0c2-8ed4-4c84-990e-16605422b534
	I0528 15:27:35.896844   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:35.896844   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:35.896946   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:35.896946   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:35.896985   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:35 GMT
	I0528 15:27:35.897039   14720 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-517300","namespace":"kube-system","uid":"2dc1e4d9-ef1b-4f83-8762-569efcb3f4c6","resourceVersion":"1787","creationTimestamp":"2024-05-28T22:04:18Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"84e130f816e1eba82686beb351d9b34a","kubernetes.io/config.mirror":"84e130f816e1eba82686beb351d9b34a","kubernetes.io/config.seen":"2024-05-28T22:04:18.078857889Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:04:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5449 chars]
	I0528 15:27:36.091033   14720 request.go:629] Waited for 192.9616ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:27:36.091098   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:27:36.091098   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:36.091098   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:36.091098   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:36.091846   14720 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:27:36.091846   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:36.091846   14720 round_trippers.go:580]     Audit-Id: 2e79c9ef-a6e1-4792-8ae5-59f847468f46
	I0528 15:27:36.091846   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:36.095000   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:36.095000   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:36.095000   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:36.095051   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:36 GMT
	I0528 15:27:36.095459   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1783","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0528 15:27:36.095593   14720 pod_ready.go:97] node "multinode-517300" hosting pod "kube-scheduler-multinode-517300" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-517300" has status "Ready":"False"
	I0528 15:27:36.095593   14720 pod_ready.go:81] duration metric: took 407.8151ms for pod "kube-scheduler-multinode-517300" in "kube-system" namespace to be "Ready" ...
	E0528 15:27:36.095593   14720 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-517300" hosting pod "kube-scheduler-multinode-517300" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-517300" has status "Ready":"False"
	I0528 15:27:36.095593   14720 pod_ready.go:38] duration metric: took 1.6210417s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0528 15:27:36.095593   14720 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0528 15:27:36.132156   14720 command_runner.go:130] > -16
	I0528 15:27:36.132402   14720 ops.go:34] apiserver oom_adj: -16
	I0528 15:27:36.132493   14720 kubeadm.go:591] duration metric: took 12.7560452s to restartPrimaryControlPlane
	I0528 15:27:36.132517   14720 kubeadm.go:393] duration metric: took 12.8204697s to StartCluster
	I0528 15:27:36.132517   14720 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 15:27:36.132517   14720 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0528 15:27:36.134100   14720 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 15:27:36.135998   14720 start.go:234] Will wait 6m0s for node &{Name: IP:172.18.212.160 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0528 15:27:36.136047   14720 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0528 15:27:36.139721   14720 out.go:177] * Verifying Kubernetes components...
	I0528 15:27:36.144201   14720 out.go:177] * Enabled addons: 
	I0528 15:27:36.136539   14720 config.go:182] Loaded profile config "multinode-517300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0528 15:27:36.149069   14720 addons.go:510] duration metric: took 13.0711ms for enable addons: enabled=[]
	I0528 15:27:36.163352   14720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 15:27:36.410324   14720 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0528 15:27:36.442287   14720 node_ready.go:35] waiting up to 6m0s for node "multinode-517300" to be "Ready" ...
	I0528 15:27:36.442287   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:27:36.442287   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:36.442287   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:36.442287   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:36.443552   14720 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0528 15:27:36.443552   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:36.443552   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:36.443552   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:36.447224   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:36.447224   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:36 GMT
	I0528 15:27:36.447224   14720 round_trippers.go:580]     Audit-Id: 251a79b3-0490-4aa7-8828-b37bb86532d3
	I0528 15:27:36.447224   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:36.447647   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1783","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0528 15:27:36.959190   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:27:36.959270   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:36.959311   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:36.959311   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:36.967674   14720 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0528 15:27:36.967713   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:36.967713   14720 round_trippers.go:580]     Audit-Id: 715c32d9-3eb4-4b8f-84ee-2609f2c271d8
	I0528 15:27:36.967713   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:36.967713   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:36.967713   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:36.967713   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:36.967713   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:36 GMT
	I0528 15:27:36.967713   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1783","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0528 15:27:37.454647   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:27:37.454718   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:37.454929   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:37.454997   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:37.458638   14720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 15:27:37.458748   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:37.458748   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:37.458748   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:37.458748   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:37.458748   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:37.458748   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:37 GMT
	I0528 15:27:37.458748   14720 round_trippers.go:580]     Audit-Id: 197fa7cf-8751-45c3-b791-e27c84bed472
	I0528 15:27:37.458748   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1783","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0528 15:27:37.954327   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:27:37.954565   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:37.954565   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:37.954565   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:37.955054   14720 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:27:37.958504   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:37.958504   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:37 GMT
	I0528 15:27:37.958504   14720 round_trippers.go:580]     Audit-Id: ef53bf37-ab00-4d6c-a8bf-c5e1569d7a10
	I0528 15:27:37.958504   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:37.958504   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:37.958504   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:37.958504   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:37.959976   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1783","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0528 15:27:38.454800   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:27:38.454903   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:38.454903   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:38.454963   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:38.455254   14720 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:27:38.455254   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:38.455254   14720 round_trippers.go:580]     Audit-Id: a85fa80e-9572-4532-bc99-b09f17f78555
	I0528 15:27:38.455254   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:38.460129   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:38.460129   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:38.460129   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:38.460129   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:38 GMT
	I0528 15:27:38.461392   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1783","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0528 15:27:38.461915   14720 node_ready.go:53] node "multinode-517300" has status "Ready":"False"
	I0528 15:27:38.951753   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:27:38.951753   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:38.951753   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:38.951753   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:38.952517   14720 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:27:38.952517   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:38.952517   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:38.952517   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:38.952517   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:38 GMT
	I0528 15:27:38.956207   14720 round_trippers.go:580]     Audit-Id: 848b187b-fb14-49fa-ab39-aea8bc8ba119
	I0528 15:27:38.956207   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:38.956207   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:38.956456   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1783","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0528 15:27:39.467133   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:27:39.467487   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:39.467487   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:39.467487   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:39.472197   14720 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:27:39.472261   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:39.472261   14720 round_trippers.go:580]     Audit-Id: bca7dccc-9f77-4200-a77e-c436513dc483
	I0528 15:27:39.472261   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:39.472261   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:39.472261   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:39.472261   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:39.472261   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:39 GMT
	I0528 15:27:39.473011   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1783","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0528 15:27:39.953959   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:27:39.954034   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:39.954034   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:39.954123   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:39.960527   14720 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0528 15:27:39.960527   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:39.960527   14720 round_trippers.go:580]     Audit-Id: 451e32f7-02ab-4014-b169-e5e590eaa1d1
	I0528 15:27:39.960527   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:39.960527   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:39.960527   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:39.960527   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:39.960527   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:39 GMT
	I0528 15:27:39.960527   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1783","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0528 15:27:40.443390   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:27:40.443453   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:40.443453   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:40.443453   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:40.448128   14720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 15:27:40.451281   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:40.451281   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:40.451281   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:40.451281   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:40.451281   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:40.451281   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:40 GMT
	I0528 15:27:40.451281   14720 round_trippers.go:580]     Audit-Id: 7ae23976-d1e3-4003-b671-71e5f0e83bb2
	I0528 15:27:40.451424   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1783","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0528 15:27:40.949349   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:27:40.949411   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:40.949411   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:40.949411   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:40.953215   14720 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:27:40.953215   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:40.953215   14720 round_trippers.go:580]     Audit-Id: f6cd5edf-29fa-4509-8498-9fba1d6dfa5e
	I0528 15:27:40.953310   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:40.953310   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:40.953310   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:40.953310   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:40.953310   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:40 GMT
	I0528 15:27:40.953451   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1783","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0528 15:27:40.953451   14720 node_ready.go:53] node "multinode-517300" has status "Ready":"False"
	I0528 15:27:41.455482   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:27:41.455482   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:41.455641   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:41.455641   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:41.460644   14720 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0528 15:27:41.460644   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:41.460644   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:41.460644   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:41 GMT
	I0528 15:27:41.460644   14720 round_trippers.go:580]     Audit-Id: 0a4924fa-0d92-4b7d-a366-9bbea88d5abd
	I0528 15:27:41.460644   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:41.460644   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:41.460644   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:41.460644   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1783","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0528 15:27:41.955971   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:27:41.955971   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:41.955971   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:41.955971   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:41.956508   14720 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:27:41.959364   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:41.959364   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:41.959364   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:41 GMT
	I0528 15:27:41.959364   14720 round_trippers.go:580]     Audit-Id: a7994bd2-2220-4d48-b85f-52e33bbb7656
	I0528 15:27:41.959364   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:41.959364   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:41.959364   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:41.959988   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1783","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0528 15:27:42.456382   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:27:42.456473   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:42.456473   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:42.456473   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:42.456750   14720 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:27:42.460741   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:42.460741   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:42.460741   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:42 GMT
	I0528 15:27:42.460741   14720 round_trippers.go:580]     Audit-Id: 593ac8f7-465d-4c88-a5e2-7320dcf1b6ad
	I0528 15:27:42.460741   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:42.460741   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:42.460741   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:42.460741   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1783","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0528 15:27:42.954889   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:27:42.954889   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:42.954994   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:42.954994   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:42.960774   14720 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0528 15:27:42.960774   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:42.960774   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:42.960774   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:42.960774   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:42 GMT
	I0528 15:27:42.960774   14720 round_trippers.go:580]     Audit-Id: 3c60242a-14db-4e74-a7b7-6e420360f17f
	I0528 15:27:42.960774   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:42.960774   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:42.961313   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1783","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0528 15:27:42.961451   14720 node_ready.go:53] node "multinode-517300" has status "Ready":"False"
	I0528 15:27:43.445242   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:27:43.445320   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:43.445320   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:43.445412   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:43.450069   14720 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:27:43.450069   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:43.450069   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:43.450069   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:43.450069   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:43.450069   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:43.450069   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:43 GMT
	I0528 15:27:43.450069   14720 round_trippers.go:580]     Audit-Id: dedf0da1-23d5-46f2-9706-fdca02c2768c
	I0528 15:27:43.450069   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1783","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0528 15:27:43.961409   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:27:43.961409   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:43.961409   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:43.961409   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:43.961957   14720 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:27:43.965134   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:43.965134   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:43.965134   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:43 GMT
	I0528 15:27:43.965134   14720 round_trippers.go:580]     Audit-Id: 47fa7c0f-4bf6-4765-b854-b30d4518ad10
	I0528 15:27:43.965134   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:43.965134   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:43.965134   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:43.965458   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1901","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0528 15:27:44.444390   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:27:44.444390   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:44.444390   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:44.444390   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:44.445122   14720 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:27:44.445122   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:44.452322   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:44.452322   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:44.452322   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:44.452322   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:44 GMT
	I0528 15:27:44.452322   14720 round_trippers.go:580]     Audit-Id: a7785b18-9084-4d27-a934-10119e682580
	I0528 15:27:44.452439   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:44.452894   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1901","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0528 15:27:44.952409   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:27:44.952479   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:44.952479   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:44.952479   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:44.952799   14720 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:27:44.956600   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:44.956600   14720 round_trippers.go:580]     Audit-Id: 940fe5f9-cacb-4703-b89e-dd975cc01c8a
	I0528 15:27:44.956600   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:44.956600   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:44.956600   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:44.956600   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:44.956600   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:44 GMT
	I0528 15:27:44.956600   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1901","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0528 15:27:45.453212   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:27:45.453410   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:45.453410   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:45.453410   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:45.457727   14720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 15:27:45.457727   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:45.457727   14720 round_trippers.go:580]     Audit-Id: b0af8f69-d04c-4264-97e9-536e12f6cf0b
	I0528 15:27:45.457814   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:45.457814   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:45.457814   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:45.457814   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:45.457814   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:45 GMT
	I0528 15:27:45.457883   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1901","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0528 15:27:45.458621   14720 node_ready.go:53] node "multinode-517300" has status "Ready":"False"
	I0528 15:27:45.950165   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:27:45.950165   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:45.950165   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:45.950165   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:45.954638   14720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 15:27:45.954638   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:45.954638   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:45.954638   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:45.954638   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:45.954638   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:45.954638   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:45 GMT
	I0528 15:27:45.954638   14720 round_trippers.go:580]     Audit-Id: caf3eef1-6d17-42b7-bd41-6e1a38864014
	I0528 15:27:45.954638   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1901","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0528 15:27:46.451742   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:27:46.451742   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:46.451742   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:46.451742   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:46.459797   14720 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0528 15:27:46.459797   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:46.459797   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:46 GMT
	I0528 15:27:46.459797   14720 round_trippers.go:580]     Audit-Id: b3534156-99fd-4711-bb97-806a2ce8b8fa
	I0528 15:27:46.459797   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:46.459797   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:46.459797   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:46.459797   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:46.459797   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1901","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0528 15:27:46.957621   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:27:46.957819   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:46.957819   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:46.957819   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:46.960880   14720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 15:27:46.960880   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:46.960880   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:46.960880   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:46.960880   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:46.960880   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:46 GMT
	I0528 15:27:46.960880   14720 round_trippers.go:580]     Audit-Id: 68e67770-7e92-4dff-9be3-3c5353df8267
	I0528 15:27:46.960880   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:46.960880   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1901","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0528 15:27:47.457770   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:27:47.457862   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:47.457862   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:47.457862   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:47.458195   14720 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:27:47.458195   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:47.458195   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:47.458195   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:47.458195   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:47.458195   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:47 GMT
	I0528 15:27:47.458195   14720 round_trippers.go:580]     Audit-Id: aba32545-5831-46ed-92b8-de72a6a9ae67
	I0528 15:27:47.458195   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:47.462414   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1901","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0528 15:27:47.462861   14720 node_ready.go:53] node "multinode-517300" has status "Ready":"False"
	I0528 15:27:47.952961   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:27:47.953052   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:47.953052   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:47.953052   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:47.953650   14720 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:27:47.957838   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:47.957838   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:47.957838   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:47.957838   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:47.957838   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:47.957838   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:47 GMT
	I0528 15:27:47.957838   14720 round_trippers.go:580]     Audit-Id: 64543aac-6a4a-4b62-9d0c-b7adf9a3605e
	I0528 15:27:47.958024   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1901","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0528 15:27:48.451794   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:27:48.451794   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:48.451794   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:48.451794   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:48.452321   14720 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:27:48.456187   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:48.456187   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:48.456187   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:48.456187   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:48.456187   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:48.456187   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:48 GMT
	I0528 15:27:48.456187   14720 round_trippers.go:580]     Audit-Id: b1ef798d-6e1c-4781-91fd-a01259334926
	I0528 15:27:48.456187   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1901","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0528 15:27:48.944576   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:27:48.944883   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:48.944999   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:48.945105   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:48.945906   14720 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:27:48.949006   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:48.949101   14720 round_trippers.go:580]     Audit-Id: 1858afbe-a696-4a68-88fd-b8463dfb8c44
	I0528 15:27:48.949101   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:48.949171   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:48.949171   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:48.949171   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:48.949171   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:48 GMT
	I0528 15:27:48.949171   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1901","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0528 15:27:49.451885   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:27:49.451945   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:49.451981   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:49.451981   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:49.452288   14720 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:27:49.452288   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:49.452288   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:49.452288   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:49 GMT
	I0528 15:27:49.452288   14720 round_trippers.go:580]     Audit-Id: 983131a5-6beb-4aee-8986-081f55b3c820
	I0528 15:27:49.452288   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:49.452288   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:49.456267   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:49.456653   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1901","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0528 15:27:49.949600   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:27:49.949600   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:49.949685   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:49.949685   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:49.950495   14720 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:27:49.953421   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:49.953564   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:49.953564   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:49.953603   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:49 GMT
	I0528 15:27:49.953603   14720 round_trippers.go:580]     Audit-Id: 74e6d44f-1637-484c-b0d0-142ed653e212
	I0528 15:27:49.953603   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:49.953603   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:49.953787   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1901","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0528 15:27:49.954386   14720 node_ready.go:53] node "multinode-517300" has status "Ready":"False"
	I0528 15:27:50.452543   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:27:50.452543   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:50.452543   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:50.452543   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:50.453349   14720 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:27:50.456712   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:50.456712   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:50.456712   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:50.456712   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:50.456712   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:50 GMT
	I0528 15:27:50.456712   14720 round_trippers.go:580]     Audit-Id: ee19d1a0-0e8b-41cc-a868-3bb152118c1f
	I0528 15:27:50.456712   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:50.456712   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1901","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0528 15:27:50.949352   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:27:50.949352   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:50.949352   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:50.949352   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:50.953504   14720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 15:27:50.953504   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:50.953504   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:50 GMT
	I0528 15:27:50.953504   14720 round_trippers.go:580]     Audit-Id: f8f5c1e5-40aa-42ae-8698-6c1f106e1202
	I0528 15:27:50.953504   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:50.953504   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:50.953504   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:50.953504   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:50.953504   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1901","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0528 15:27:51.444558   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:27:51.444898   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:51.444955   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:51.444955   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:51.446078   14720 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0528 15:27:51.448980   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:51.448980   14720 round_trippers.go:580]     Audit-Id: 77aedf58-1dcf-479e-a1d2-eb2207d585de
	I0528 15:27:51.448980   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:51.448980   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:51.448980   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:51.448980   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:51.448980   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:51 GMT
	I0528 15:27:51.449246   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1901","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0528 15:27:51.952091   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:27:51.952159   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:51.952159   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:51.952159   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:51.956323   14720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 15:27:51.956323   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:51.956323   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:51.956323   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:51 GMT
	I0528 15:27:51.956323   14720 round_trippers.go:580]     Audit-Id: 254d5dda-6b73-47c8-833c-b91dd1b0e27f
	I0528 15:27:51.956323   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:51.956323   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:51.956323   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:51.956323   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1901","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0528 15:27:51.957186   14720 node_ready.go:53] node "multinode-517300" has status "Ready":"False"
	I0528 15:27:52.456977   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:27:52.457058   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:52.457058   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:52.457058   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:52.461510   14720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 15:27:52.461510   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:52.461510   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:52.461510   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:52.461510   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:52 GMT
	I0528 15:27:52.461510   14720 round_trippers.go:580]     Audit-Id: 399b9815-96f6-4a68-996f-a9abf2f53511
	I0528 15:27:52.461510   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:52.461510   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:52.461510   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1901","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0528 15:27:52.956024   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:27:52.956024   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:52.956024   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:52.956024   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:52.959953   14720 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:27:52.960046   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:52.960046   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:52.960146   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:52.960203   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:52.960203   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:52.960203   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:52 GMT
	I0528 15:27:52.960203   14720 round_trippers.go:580]     Audit-Id: 3c7c1f4e-b7d9-4934-97a3-2f3f41575b25
	I0528 15:27:52.960203   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1901","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0528 15:27:53.456156   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:27:53.456156   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:53.456156   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:53.456156   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:53.456688   14720 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:27:53.456688   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:53.456688   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:53 GMT
	I0528 15:27:53.456688   14720 round_trippers.go:580]     Audit-Id: 9cb106e2-c669-49f9-a5ed-562f337952e5
	I0528 15:27:53.456688   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:53.456688   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:53.460955   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:53.460986   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:53.461395   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1901","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0528 15:27:53.945172   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:27:53.945239   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:53.945239   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:53.945239   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:53.945586   14720 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:27:53.945586   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:53.945586   14720 round_trippers.go:580]     Audit-Id: 3a86c8d5-6cd1-42e6-b977-ff6b47529001
	I0528 15:27:53.945586   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:53.949291   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:53.949291   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:53.949291   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:53.949291   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:53 GMT
	I0528 15:27:53.949520   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1901","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0528 15:27:54.450825   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:27:54.450985   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:54.450985   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:54.450985   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:54.453857   14720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 15:27:54.453857   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:54.453857   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:54.453857   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:54.453857   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:54.455692   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:54.455692   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:54 GMT
	I0528 15:27:54.455692   14720 round_trippers.go:580]     Audit-Id: 32e727ab-8066-4902-ac9f-c93e9ff7b8fe
	I0528 15:27:54.455908   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1901","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0528 15:27:54.456733   14720 node_ready.go:53] node "multinode-517300" has status "Ready":"False"
	I0528 15:27:54.945340   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:27:54.945427   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:54.945427   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:54.945427   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:54.946220   14720 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:27:54.946220   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:54.949325   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:54.949325   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:54 GMT
	I0528 15:27:54.949325   14720 round_trippers.go:580]     Audit-Id: d559e8d4-5992-4df2-8c40-71c9b1e93552
	I0528 15:27:54.949325   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:54.949325   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:54.949325   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:54.949498   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1901","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0528 15:27:55.456533   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:27:55.456533   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:55.456533   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:55.456533   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:55.457286   14720 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:27:55.457286   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:55.457286   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:55.457286   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:55 GMT
	I0528 15:27:55.461209   14720 round_trippers.go:580]     Audit-Id: fce34905-dd01-4aa0-8892-91e2c7e307dd
	I0528 15:27:55.461209   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:55.461272   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:55.461272   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:55.461722   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1901","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0528 15:27:55.944796   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:27:55.944867   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:55.944867   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:55.944867   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:55.945586   14720 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:27:55.945586   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:55.945586   14720 round_trippers.go:580]     Audit-Id: f9acd967-ff75-48a3-8bbd-0f66304fe0da
	I0528 15:27:55.945586   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:55.945586   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:55.945586   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:55.948794   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:55.948794   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:55 GMT
	I0528 15:27:55.949041   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1901","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0528 15:27:56.456221   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:27:56.456221   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:56.456221   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:56.456221   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:56.459611   14720 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:27:56.459777   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:56.459845   14720 round_trippers.go:580]     Audit-Id: 17e0f196-8b0c-4c47-95ae-3ec8b14191d3
	I0528 15:27:56.459845   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:56.459845   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:56.459845   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:56.459845   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:56.459845   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:56 GMT
	I0528 15:27:56.459845   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1901","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0528 15:27:56.460919   14720 node_ready.go:53] node "multinode-517300" has status "Ready":"False"
	I0528 15:27:56.958243   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:27:56.958243   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:56.958243   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:56.958243   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:56.959385   14720 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0528 15:27:56.962080   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:56.962080   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:56 GMT
	I0528 15:27:56.962080   14720 round_trippers.go:580]     Audit-Id: 4942531e-a065-4650-8e32-ccefc8c2b682
	I0528 15:27:56.962080   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:56.962080   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:56.962080   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:56.962080   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:56.962383   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1901","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0528 15:27:57.447919   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:27:57.447919   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:57.447919   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:57.447919   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:57.452666   14720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 15:27:57.452697   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:57.452697   14720 round_trippers.go:580]     Audit-Id: 7dbd79e3-a287-46fc-b0ba-765326d0ddde
	I0528 15:27:57.452697   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:57.452697   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:57.452697   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:57.452697   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:57.452697   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:57 GMT
	I0528 15:27:57.452697   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1901","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0528 15:27:57.955696   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:27:57.955941   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:57.956079   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:57.956079   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:57.956958   14720 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:27:57.956958   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:57.956958   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:57.956958   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:57.956958   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:57.956958   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:57 GMT
	I0528 15:27:57.956958   14720 round_trippers.go:580]     Audit-Id: c89cb395-be00-4e01-9545-606f77dc14ff
	I0528 15:27:57.961386   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:57.961906   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1901","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0528 15:27:58.454144   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:27:58.454236   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:58.454236   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:58.454236   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:58.454546   14720 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:27:58.458691   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:58.458691   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:58.458691   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:58 GMT
	I0528 15:27:58.458691   14720 round_trippers.go:580]     Audit-Id: e0d78bb5-0f9b-4611-ad0a-0fdb43c1076e
	I0528 15:27:58.458691   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:58.458691   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:58.458691   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:58.459024   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1901","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0528 15:27:58.948304   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:27:58.948304   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:58.948547   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:58.948547   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:58.948824   14720 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:27:58.948824   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:58.948824   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:58.948824   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:58.952284   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:58 GMT
	I0528 15:27:58.952284   14720 round_trippers.go:580]     Audit-Id: b4679ff2-4caa-4b2e-a1b0-ac87e62994d4
	I0528 15:27:58.952284   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:58.952284   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:58.952480   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1901","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0528 15:27:58.953005   14720 node_ready.go:53] node "multinode-517300" has status "Ready":"False"
	I0528 15:27:59.456231   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:27:59.456384   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:59.456384   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:59.456494   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:59.456768   14720 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:27:59.456768   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:59.460551   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:59.460551   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:59.460551   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:59 GMT
	I0528 15:27:59.460551   14720 round_trippers.go:580]     Audit-Id: d3ed8cff-1567-4b14-9d6f-53fb84589e9f
	I0528 15:27:59.460551   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:59.460551   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:59.460886   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1901","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0528 15:27:59.946828   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:27:59.946828   14720 round_trippers.go:469] Request Headers:
	I0528 15:27:59.946828   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:27:59.946828   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:27:59.947397   14720 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:27:59.951119   14720 round_trippers.go:577] Response Headers:
	I0528 15:27:59.951119   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:27:59.951119   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:27:59.951119   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:27:59.951226   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:27:59 GMT
	I0528 15:27:59.951226   14720 round_trippers.go:580]     Audit-Id: c876420b-650e-4b1f-b0ea-da58d00905d7
	I0528 15:27:59.951226   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:27:59.951346   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1901","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0528 15:28:00.451652   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:28:00.451652   14720 round_trippers.go:469] Request Headers:
	I0528 15:28:00.451652   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:28:00.451652   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:28:00.456929   14720 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:28:00.456929   14720 round_trippers.go:577] Response Headers:
	I0528 15:28:00.456929   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:28:00.456929   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:28:00.456929   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:28:00 GMT
	I0528 15:28:00.456929   14720 round_trippers.go:580]     Audit-Id: fe470701-d583-4056-b922-4005c0b4b8a7
	I0528 15:28:00.456929   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:28:00.456929   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:28:00.456929   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1901","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0528 15:28:00.952828   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:28:00.952828   14720 round_trippers.go:469] Request Headers:
	I0528 15:28:00.952828   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:28:00.952828   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:28:00.957154   14720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 15:28:00.957277   14720 round_trippers.go:577] Response Headers:
	I0528 15:28:00.957277   14720 round_trippers.go:580]     Audit-Id: ddd02c78-f652-40d2-b88c-f2ef1685cba9
	I0528 15:28:00.957277   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:28:00.957277   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:28:00.957277   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:28:00.957277   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:28:00.957358   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:28:00 GMT
	I0528 15:28:00.957717   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1901","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0528 15:28:00.958907   14720 node_ready.go:53] node "multinode-517300" has status "Ready":"False"
	I0528 15:28:01.454700   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:28:01.454700   14720 round_trippers.go:469] Request Headers:
	I0528 15:28:01.454700   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:28:01.454700   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:28:01.455337   14720 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:28:01.458486   14720 round_trippers.go:577] Response Headers:
	I0528 15:28:01.458486   14720 round_trippers.go:580]     Audit-Id: 025b0102-bd09-4348-9e38-89c1212c4c40
	I0528 15:28:01.458486   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:28:01.458486   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:28:01.458486   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:28:01.458486   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:28:01.458486   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:28:01 GMT
	I0528 15:28:01.458789   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1901","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0528 15:28:01.957545   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:28:01.957545   14720 round_trippers.go:469] Request Headers:
	I0528 15:28:01.957545   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:28:01.957545   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:28:01.958107   14720 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:28:01.958107   14720 round_trippers.go:577] Response Headers:
	I0528 15:28:01.958107   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:28:01.961347   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:28:01.961347   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:28:01 GMT
	I0528 15:28:01.961347   14720 round_trippers.go:580]     Audit-Id: 93bd0d24-3068-4c24-a48c-291553f8eaeb
	I0528 15:28:01.961347   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:28:01.961347   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:28:01.961602   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1930","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5359 chars]
	I0528 15:28:01.962322   14720 node_ready.go:49] node "multinode-517300" has status "Ready":"True"
	I0528 15:28:01.962322   14720 node_ready.go:38] duration metric: took 25.5199538s for node "multinode-517300" to be "Ready" ...
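	(Editor's note, illustration only.) The preceding lines show the node-readiness wait: the client polls GET /api/v1/nodes/multinode-517300 roughly every 500ms until the node's Ready condition turns True, which here took ~25.5s. As a minimal sketch of the same check, not minikube's actual node_ready.go code, and assuming a kubeconfig at the default ~/.kube/config path and the node name taken from the log, a client-go version could look like this:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumption: kubeconfig lives at the default home path (~/.kube/config).
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}

		// Poll the node object until its Ready condition is True,
		// mirroring the ~500ms GET loop visible in the log above.
		for {
			node, err := client.CoreV1().Nodes().Get(context.TODO(), "multinode-517300", metav1.GetOptions{})
			if err != nil {
				panic(err)
			}
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
	}

	The 500ms sleep is chosen only to match the spacing of the timestamps in the log; the real poll interval is whatever minikube configures internally.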
	I0528 15:28:01.962426   14720 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0528 15:28:01.962495   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/namespaces/kube-system/pods
	I0528 15:28:01.962495   14720 round_trippers.go:469] Request Headers:
	I0528 15:28:01.962495   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:28:01.962495   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:28:01.970272   14720 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0528 15:28:01.970272   14720 round_trippers.go:577] Response Headers:
	I0528 15:28:01.970272   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:28:01.970272   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:28:01.970272   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:28:01.970272   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:28:01 GMT
	I0528 15:28:01.970272   14720 round_trippers.go:580]     Audit-Id: 87394050-2e8d-4c4a-bf35-09b2561ef264
	I0528 15:28:01.970272   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:28:01.972381   14720 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1930"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-p9vb9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d1d5d377-99a6-445e-9597-446b3057e80f","resourceVersion":"1799","creationTimestamp":"2024-05-28T22:04:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d11a1b0d-8687-41d7-b841-24979449141a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:04:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d11a1b0d-8687-41d7-b841-24979449141a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 86201 chars]
	I0528 15:28:01.976024   14720 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-p9vb9" in "kube-system" namespace to be "Ready" ...
	I0528 15:28:01.976024   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-p9vb9
	I0528 15:28:01.976024   14720 round_trippers.go:469] Request Headers:
	I0528 15:28:01.976024   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:28:01.976024   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:28:01.977711   14720 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0528 15:28:01.977711   14720 round_trippers.go:577] Response Headers:
	I0528 15:28:01.977711   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:28:01.977711   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:28:01.977711   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:28:01.979308   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:28:01 GMT
	I0528 15:28:01.979308   14720 round_trippers.go:580]     Audit-Id: 31f320bd-bbac-4c19-97c2-ba274c967dde
	I0528 15:28:01.979308   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:28:01.979587   14720 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-p9vb9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d1d5d377-99a6-445e-9597-446b3057e80f","resourceVersion":"1799","creationTimestamp":"2024-05-28T22:04:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d11a1b0d-8687-41d7-b841-24979449141a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:04:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d11a1b0d-8687-41d7-b841-24979449141a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0528 15:28:01.980306   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:28:01.980306   14720 round_trippers.go:469] Request Headers:
	I0528 15:28:01.980306   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:28:01.980360   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:28:01.983061   14720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 15:28:01.983061   14720 round_trippers.go:577] Response Headers:
	I0528 15:28:01.983061   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:28:01.983061   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:28:01.983364   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:28:01 GMT
	I0528 15:28:01.983364   14720 round_trippers.go:580]     Audit-Id: 7fe11c65-72a8-4c24-801c-c8e65c621d0d
	I0528 15:28:01.983364   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:28:01.983364   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:28:01.983756   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1930","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5359 chars]
	I0528 15:28:02.489129   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-p9vb9
	I0528 15:28:02.489217   14720 round_trippers.go:469] Request Headers:
	I0528 15:28:02.489217   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:28:02.489217   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:28:02.495307   14720 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0528 15:28:02.495540   14720 round_trippers.go:577] Response Headers:
	I0528 15:28:02.495540   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:28:02.495540   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:28:02.495540   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:28:02.495540   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:28:02 GMT
	I0528 15:28:02.495618   14720 round_trippers.go:580]     Audit-Id: 3e7c25a2-6d4c-4e0e-973b-76acb53d1095
	I0528 15:28:02.495618   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:28:02.495768   14720 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-p9vb9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d1d5d377-99a6-445e-9597-446b3057e80f","resourceVersion":"1799","creationTimestamp":"2024-05-28T22:04:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d11a1b0d-8687-41d7-b841-24979449141a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:04:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d11a1b0d-8687-41d7-b841-24979449141a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0528 15:28:02.496662   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:28:02.496662   14720 round_trippers.go:469] Request Headers:
	I0528 15:28:02.496662   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:28:02.496662   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:28:02.499793   14720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 15:28:02.499793   14720 round_trippers.go:577] Response Headers:
	I0528 15:28:02.499793   14720 round_trippers.go:580]     Audit-Id: b9a24684-2f3b-43d2-b4dd-0ed26b942271
	I0528 15:28:02.499793   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:28:02.499793   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:28:02.499793   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:28:02.499793   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:28:02.499793   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:28:02 GMT
	I0528 15:28:02.499793   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1930","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5359 chars]
	I0528 15:28:02.982747   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-p9vb9
	I0528 15:28:02.982849   14720 round_trippers.go:469] Request Headers:
	I0528 15:28:02.982849   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:28:02.982849   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:28:02.983158   14720 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:28:02.987363   14720 round_trippers.go:577] Response Headers:
	I0528 15:28:02.987363   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:28:02.987537   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:28:02.987537   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:28:02 GMT
	I0528 15:28:02.987537   14720 round_trippers.go:580]     Audit-Id: ee5483bf-0062-47a7-b92a-8ae7f98fd5fd
	I0528 15:28:02.987537   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:28:02.987537   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:28:02.988250   14720 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-p9vb9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d1d5d377-99a6-445e-9597-446b3057e80f","resourceVersion":"1799","creationTimestamp":"2024-05-28T22:04:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d11a1b0d-8687-41d7-b841-24979449141a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:04:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d11a1b0d-8687-41d7-b841-24979449141a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0528 15:28:02.991204   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:28:02.991204   14720 round_trippers.go:469] Request Headers:
	I0528 15:28:02.991204   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:28:02.991204   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:28:02.994582   14720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 15:28:02.994582   14720 round_trippers.go:577] Response Headers:
	I0528 15:28:02.994686   14720 round_trippers.go:580]     Audit-Id: 1d082699-9f1d-4d18-a6ac-100679ddfde1
	I0528 15:28:02.994686   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:28:02.994686   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:28:02.994686   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:28:02.994686   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:28:02.994686   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:28:02 GMT
	I0528 15:28:02.994832   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1930","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5359 chars]
	I0528 15:28:03.483838   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-p9vb9
	I0528 15:28:03.483838   14720 round_trippers.go:469] Request Headers:
	I0528 15:28:03.483838   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:28:03.483838   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:28:03.484498   14720 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:28:03.488223   14720 round_trippers.go:577] Response Headers:
	I0528 15:28:03.488223   14720 round_trippers.go:580]     Audit-Id: 2a15be28-c745-4cfd-a30a-90ee85f9370b
	I0528 15:28:03.488223   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:28:03.488223   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:28:03.488223   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:28:03.488223   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:28:03.488223   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:28:03 GMT
	I0528 15:28:03.488223   14720 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-p9vb9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d1d5d377-99a6-445e-9597-446b3057e80f","resourceVersion":"1799","creationTimestamp":"2024-05-28T22:04:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d11a1b0d-8687-41d7-b841-24979449141a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:04:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d11a1b0d-8687-41d7-b841-24979449141a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0528 15:28:03.489486   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:28:03.489486   14720 round_trippers.go:469] Request Headers:
	I0528 15:28:03.489486   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:28:03.489486   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:28:03.494818   14720 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0528 15:28:03.494818   14720 round_trippers.go:577] Response Headers:
	I0528 15:28:03.494818   14720 round_trippers.go:580]     Audit-Id: 5edd5cc1-f3a8-4e76-8f39-a75718835493
	I0528 15:28:03.494818   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:28:03.494818   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:28:03.494818   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:28:03.494818   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:28:03.494818   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:28:03 GMT
	I0528 15:28:03.495538   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1930","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5359 chars]
	I0528 15:28:03.982209   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-p9vb9
	I0528 15:28:03.982306   14720 round_trippers.go:469] Request Headers:
	I0528 15:28:03.982306   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:28:03.982306   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:28:03.984248   14720 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0528 15:28:03.984248   14720 round_trippers.go:577] Response Headers:
	I0528 15:28:03.984248   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:28:03.984248   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:28:03.984248   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:28:03.984248   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:28:03 GMT
	I0528 15:28:03.984248   14720 round_trippers.go:580]     Audit-Id: 0fa3868f-18de-4ffb-8bf3-40a96aa8f058
	I0528 15:28:03.984248   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:28:03.984248   14720 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-p9vb9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d1d5d377-99a6-445e-9597-446b3057e80f","resourceVersion":"1799","creationTimestamp":"2024-05-28T22:04:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d11a1b0d-8687-41d7-b841-24979449141a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:04:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d11a1b0d-8687-41d7-b841-24979449141a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0528 15:28:03.988246   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:28:03.988307   14720 round_trippers.go:469] Request Headers:
	I0528 15:28:03.988307   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:28:03.988307   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:28:03.991194   14720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 15:28:03.991833   14720 round_trippers.go:577] Response Headers:
	I0528 15:28:03.991833   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:28:03.991860   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:28:03.991860   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:28:03.991860   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:28:03 GMT
	I0528 15:28:03.991860   14720 round_trippers.go:580]     Audit-Id: 736907c9-4aef-421f-9741-a96dd58916b2
	I0528 15:28:03.991860   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:28:03.991860   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1935","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0528 15:28:03.992731   14720 pod_ready.go:102] pod "coredns-7db6d8ff4d-p9vb9" in "kube-system" namespace has status "Ready":"False"
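	(Editor's note, illustration only.) From here the same pattern repeats for the coredns pod: each iteration GETs the pod and the node, then reports whether the pod's Ready condition is True. As a hedged sketch of that per-pod check, again not the actual pod_ready.go implementation, and reusing the hypothetical client from the sketch above:

	// isPodReady reports whether the pod's Ready condition is True.
	// Sketch only; "client" is the *kubernetes.Clientset built in the previous example.
	func isPodReady(ctx context.Context, client *kubernetes.Clientset, ns, name string) (bool, error) {
		pod, err := client.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady {
				return cond.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}

	Called in a loop with a short sleep, e.g. isPodReady(ctx, client, "kube-system", "coredns-7db6d8ff4d-p9vb9"), this reproduces the polling seen in the log until the pod reports Ready:"True".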
	I0528 15:28:04.486986   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-p9vb9
	I0528 15:28:04.487057   14720 round_trippers.go:469] Request Headers:
	I0528 15:28:04.487057   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:28:04.487142   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:28:04.489207   14720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 15:28:04.490447   14720 round_trippers.go:577] Response Headers:
	I0528 15:28:04.490526   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:28:04.490526   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:28:04.490555   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:28:04 GMT
	I0528 15:28:04.490555   14720 round_trippers.go:580]     Audit-Id: 5068e956-fa86-4dba-8763-97012d1e7e26
	I0528 15:28:04.491622   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:28:04.491622   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:28:04.492158   14720 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-p9vb9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d1d5d377-99a6-445e-9597-446b3057e80f","resourceVersion":"1799","creationTimestamp":"2024-05-28T22:04:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d11a1b0d-8687-41d7-b841-24979449141a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:04:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d11a1b0d-8687-41d7-b841-24979449141a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0528 15:28:04.495241   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:28:04.495241   14720 round_trippers.go:469] Request Headers:
	I0528 15:28:04.495241   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:28:04.495241   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:28:04.499221   14720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 15:28:04.499221   14720 round_trippers.go:577] Response Headers:
	I0528 15:28:04.499221   14720 round_trippers.go:580]     Audit-Id: 62578849-7a3c-4767-b721-f3c3f77a2112
	I0528 15:28:04.499221   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:28:04.499221   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:28:04.499221   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:28:04.499221   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:28:04.499221   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:28:04 GMT
	I0528 15:28:04.499928   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1935","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0528 15:28:04.991384   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-p9vb9
	I0528 15:28:04.991384   14720 round_trippers.go:469] Request Headers:
	I0528 15:28:04.991384   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:28:04.991384   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:28:04.995381   14720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 15:28:04.995753   14720 round_trippers.go:577] Response Headers:
	I0528 15:28:04.995753   14720 round_trippers.go:580]     Audit-Id: c75d45ee-f3d4-48a9-9d04-b5e3c72e80be
	I0528 15:28:04.995753   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:28:04.995753   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:28:04.995753   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:28:04.995753   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:28:04.995753   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:28:04 GMT
	I0528 15:28:04.995753   14720 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-p9vb9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d1d5d377-99a6-445e-9597-446b3057e80f","resourceVersion":"1948","creationTimestamp":"2024-05-28T22:04:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d11a1b0d-8687-41d7-b841-24979449141a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:04:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d11a1b0d-8687-41d7-b841-24979449141a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7019 chars]
	I0528 15:28:04.996505   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:28:04.996505   14720 round_trippers.go:469] Request Headers:
	I0528 15:28:04.996505   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:28:04.997081   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:28:04.997840   14720 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:28:04.997840   14720 round_trippers.go:577] Response Headers:
	I0528 15:28:04.997840   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:28:04.997840   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:28:04 GMT
	I0528 15:28:04.997840   14720 round_trippers.go:580]     Audit-Id: 05f655ee-1e3a-44bb-895e-adbd613db33a
	I0528 15:28:04.997840   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:28:04.997840   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:28:04.997840   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:28:05.004152   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1935","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0528 15:28:05.488639   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-p9vb9
	I0528 15:28:05.488639   14720 round_trippers.go:469] Request Headers:
	I0528 15:28:05.488639   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:28:05.488639   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:28:05.493306   14720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 15:28:05.493306   14720 round_trippers.go:577] Response Headers:
	I0528 15:28:05.493306   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:28:05.493306   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:28:05.493306   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:28:05.493306   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:28:05.493306   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:28:05 GMT
	I0528 15:28:05.493306   14720 round_trippers.go:580]     Audit-Id: 3eb465ce-f8f9-4708-a76a-ea583b781954
	I0528 15:28:05.493306   14720 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-p9vb9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d1d5d377-99a6-445e-9597-446b3057e80f","resourceVersion":"1948","creationTimestamp":"2024-05-28T22:04:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d11a1b0d-8687-41d7-b841-24979449141a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:04:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d11a1b0d-8687-41d7-b841-24979449141a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7019 chars]
	I0528 15:28:05.494503   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:28:05.494605   14720 round_trippers.go:469] Request Headers:
	I0528 15:28:05.494605   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:28:05.494605   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:28:05.497854   14720 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:28:05.497927   14720 round_trippers.go:577] Response Headers:
	I0528 15:28:05.497927   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:28:05.497927   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:28:05.497927   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:28:05.497927   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:28:05 GMT
	I0528 15:28:05.497927   14720 round_trippers.go:580]     Audit-Id: 6cc7430b-128a-4666-9b0b-ebf414bb0c27
	I0528 15:28:05.497927   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:28:05.497927   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1935","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0528 15:28:05.976975   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-p9vb9
	I0528 15:28:05.976975   14720 round_trippers.go:469] Request Headers:
	I0528 15:28:05.977067   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:28:05.977067   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:28:05.978533   14720 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0528 15:28:05.982229   14720 round_trippers.go:577] Response Headers:
	I0528 15:28:05.982277   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:28:05 GMT
	I0528 15:28:05.982277   14720 round_trippers.go:580]     Audit-Id: 70187c41-5d68-4e8c-bcbb-b498ded7633d
	I0528 15:28:05.982277   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:28:05.982277   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:28:05.982277   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:28:05.982322   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:28:05.982489   14720 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-p9vb9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d1d5d377-99a6-445e-9597-446b3057e80f","resourceVersion":"1948","creationTimestamp":"2024-05-28T22:04:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d11a1b0d-8687-41d7-b841-24979449141a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:04:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d11a1b0d-8687-41d7-b841-24979449141a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7019 chars]
	I0528 15:28:05.983324   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:28:05.983355   14720 round_trippers.go:469] Request Headers:
	I0528 15:28:05.983397   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:28:05.983397   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:28:05.987821   14720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 15:28:05.987821   14720 round_trippers.go:577] Response Headers:
	I0528 15:28:05.987821   14720 round_trippers.go:580]     Audit-Id: 20616c10-8486-4bb2-b6da-eedecb2ab45f
	I0528 15:28:05.987821   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:28:05.987821   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:28:05.987821   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:28:05.987821   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:28:05.987821   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:28:05 GMT
	I0528 15:28:05.988352   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1935","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0528 15:28:06.481749   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-p9vb9
	I0528 15:28:06.481749   14720 round_trippers.go:469] Request Headers:
	I0528 15:28:06.481749   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:28:06.481749   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:28:06.482461   14720 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:28:06.487112   14720 round_trippers.go:577] Response Headers:
	I0528 15:28:06.487112   14720 round_trippers.go:580]     Audit-Id: 7a2ce5a6-20ec-4494-98f9-9373ad42872c
	I0528 15:28:06.487112   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:28:06.487112   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:28:06.487112   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:28:06.487220   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:28:06.487220   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:28:06 GMT
	I0528 15:28:06.487636   14720 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-p9vb9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d1d5d377-99a6-445e-9597-446b3057e80f","resourceVersion":"1954","creationTimestamp":"2024-05-28T22:04:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d11a1b0d-8687-41d7-b841-24979449141a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:04:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d11a1b0d-8687-41d7-b841-24979449141a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6790 chars]
	I0528 15:28:06.488361   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:28:06.488361   14720 round_trippers.go:469] Request Headers:
	I0528 15:28:06.488361   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:28:06.488361   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:28:06.491598   14720 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:28:06.491598   14720 round_trippers.go:577] Response Headers:
	I0528 15:28:06.491598   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:28:06.491598   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:28:06.491598   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:28:06.491598   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:28:06.491598   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:28:06 GMT
	I0528 15:28:06.491598   14720 round_trippers.go:580]     Audit-Id: 45df517a-f3d4-46c1-8f8a-483ad7c50645
	I0528 15:28:06.491598   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1935","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0528 15:28:06.492218   14720 pod_ready.go:92] pod "coredns-7db6d8ff4d-p9vb9" in "kube-system" namespace has status "Ready":"True"
	I0528 15:28:06.492218   14720 pod_ready.go:81] duration metric: took 4.5161786s for pod "coredns-7db6d8ff4d-p9vb9" in "kube-system" namespace to be "Ready" ...
	I0528 15:28:06.492218   14720 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-517300" in "kube-system" namespace to be "Ready" ...
	I0528 15:28:06.492218   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-517300
	I0528 15:28:06.492218   14720 round_trippers.go:469] Request Headers:
	I0528 15:28:06.492218   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:28:06.492218   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:28:06.493424   14720 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0528 15:28:06.493424   14720 round_trippers.go:577] Response Headers:
	I0528 15:28:06.493424   14720 round_trippers.go:580]     Audit-Id: 8fba314b-f414-489d-bd2b-71a457d11a03
	I0528 15:28:06.493424   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:28:06.496181   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:28:06.496181   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:28:06.496181   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:28:06.496181   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:28:06 GMT
	I0528 15:28:06.496259   14720 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-517300","namespace":"kube-system","uid":"8e99dbbd-d8ae-4164-8087-9d0d43ebec00","resourceVersion":"1909","creationTimestamp":"2024-05-28T22:27:31Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.18.212.160:2379","kubernetes.io/config.hash":"29a3c98730324ed6fda87b8b55b4c17b","kubernetes.io/config.mirror":"29a3c98730324ed6fda87b8b55b4c17b","kubernetes.io/config.seen":"2024-05-28T22:27:25.948291596Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:27:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6171 chars]
	I0528 15:28:06.496861   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:28:06.496861   14720 round_trippers.go:469] Request Headers:
	I0528 15:28:06.496861   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:28:06.497076   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:28:06.498816   14720 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0528 15:28:06.500372   14720 round_trippers.go:577] Response Headers:
	I0528 15:28:06.500372   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:28:06.500372   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:28:06.500372   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:28:06 GMT
	I0528 15:28:06.500372   14720 round_trippers.go:580]     Audit-Id: 41f8a032-b820-4712-bf27-88390bc28431
	I0528 15:28:06.500372   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:28:06.500372   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:28:06.500372   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1935","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0528 15:28:06.500994   14720 pod_ready.go:92] pod "etcd-multinode-517300" in "kube-system" namespace has status "Ready":"True"
	I0528 15:28:06.500994   14720 pod_ready.go:81] duration metric: took 8.776ms for pod "etcd-multinode-517300" in "kube-system" namespace to be "Ready" ...
	I0528 15:28:06.500994   14720 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-517300" in "kube-system" namespace to be "Ready" ...
	I0528 15:28:06.500994   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-517300
	I0528 15:28:06.500994   14720 round_trippers.go:469] Request Headers:
	I0528 15:28:06.500994   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:28:06.500994   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:28:06.503517   14720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 15:28:06.503517   14720 round_trippers.go:577] Response Headers:
	I0528 15:28:06.503517   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:28:06 GMT
	I0528 15:28:06.503517   14720 round_trippers.go:580]     Audit-Id: 1b2b8250-37bb-424c-a681-d95953edc28a
	I0528 15:28:06.503517   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:28:06.503517   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:28:06.504301   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:28:06.504301   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:28:06.505183   14720 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-517300","namespace":"kube-system","uid":"ea194cef-cdf6-4ed7-b555-b57cc085b75b","resourceVersion":"1875","creationTimestamp":"2024-05-28T22:27:31Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.18.212.160:8443","kubernetes.io/config.hash":"1e42a0eced136b8233df0d839e9a8652","kubernetes.io/config.mirror":"1e42a0eced136b8233df0d839e9a8652","kubernetes.io/config.seen":"2024-05-28T22:27:25.898753713Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:27:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 7705 chars]
	I0528 15:28:06.505795   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:28:06.505898   14720 round_trippers.go:469] Request Headers:
	I0528 15:28:06.505898   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:28:06.505898   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:28:06.508209   14720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 15:28:06.508209   14720 round_trippers.go:577] Response Headers:
	I0528 15:28:06.508209   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:28:06.508209   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:28:06.508209   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:28:06.508549   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:28:06 GMT
	I0528 15:28:06.508549   14720 round_trippers.go:580]     Audit-Id: 2027147e-22f9-4e66-a787-e0e9b2592817
	I0528 15:28:06.508667   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:28:06.508855   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1935","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0528 15:28:06.509386   14720 pod_ready.go:92] pod "kube-apiserver-multinode-517300" in "kube-system" namespace has status "Ready":"True"
	I0528 15:28:06.509386   14720 pod_ready.go:81] duration metric: took 8.3922ms for pod "kube-apiserver-multinode-517300" in "kube-system" namespace to be "Ready" ...
	I0528 15:28:06.509386   14720 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-517300" in "kube-system" namespace to be "Ready" ...
	I0528 15:28:06.509584   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-517300
	I0528 15:28:06.509584   14720 round_trippers.go:469] Request Headers:
	I0528 15:28:06.509584   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:28:06.509584   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:28:06.510249   14720 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:28:06.512542   14720 round_trippers.go:577] Response Headers:
	I0528 15:28:06.512542   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:28:06.512542   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:28:06 GMT
	I0528 15:28:06.512542   14720 round_trippers.go:580]     Audit-Id: 8b24dd10-3675-4e12-99bf-9b60631fa286
	I0528 15:28:06.512542   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:28:06.512542   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:28:06.512542   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:28:06.512542   14720 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-517300","namespace":"kube-system","uid":"8de5dc69-a626-438c-8f8e-1542254825fb","resourceVersion":"1884","creationTimestamp":"2024-05-28T22:04:18Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"26672dc0ef1c19cfbeb08736c0b97223","kubernetes.io/config.mirror":"26672dc0ef1c19cfbeb08736c0b97223","kubernetes.io/config.seen":"2024-05-28T22:04:18.078856789Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:04:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7475 chars]
	I0528 15:28:06.513286   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:28:06.513286   14720 round_trippers.go:469] Request Headers:
	I0528 15:28:06.513286   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:28:06.513286   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:28:06.516479   14720 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:28:06.516479   14720 round_trippers.go:577] Response Headers:
	I0528 15:28:06.516479   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:28:06.516479   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:28:06.516479   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:28:06 GMT
	I0528 15:28:06.516479   14720 round_trippers.go:580]     Audit-Id: 931624ed-d1a5-4639-83e3-54f88d9a8ade
	I0528 15:28:06.516479   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:28:06.516479   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:28:06.516479   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1935","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0528 15:28:06.517246   14720 pod_ready.go:92] pod "kube-controller-manager-multinode-517300" in "kube-system" namespace has status "Ready":"True"
	I0528 15:28:06.517246   14720 pod_ready.go:81] duration metric: took 7.8599ms for pod "kube-controller-manager-multinode-517300" in "kube-system" namespace to be "Ready" ...
	I0528 15:28:06.517246   14720 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-ckbkv" in "kube-system" namespace to be "Ready" ...
	I0528 15:28:06.517246   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ckbkv
	I0528 15:28:06.517246   14720 round_trippers.go:469] Request Headers:
	I0528 15:28:06.517246   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:28:06.517246   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:28:06.519807   14720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 15:28:06.520151   14720 round_trippers.go:577] Response Headers:
	I0528 15:28:06.520151   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:28:06.520151   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:28:06 GMT
	I0528 15:28:06.520151   14720 round_trippers.go:580]     Audit-Id: 064fb44e-0ef8-4585-9835-47707012d9e5
	I0528 15:28:06.520151   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:28:06.520151   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:28:06.520151   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:28:06.520151   14720 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-ckbkv","generateName":"kube-proxy-","namespace":"kube-system","uid":"13ccca53-976a-4839-be94-d3960388ebfa","resourceVersion":"637","creationTimestamp":"2024-05-28T22:07:27Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"1b969d95-d2e7-4f1f-9dfe-2e267564d504","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:07:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1b969d95-d2e7-4f1f-9dfe-2e267564d504\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5836 chars]
	I0528 15:28:06.520925   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300-m02
	I0528 15:28:06.520925   14720 round_trippers.go:469] Request Headers:
	I0528 15:28:06.520925   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:28:06.520925   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:28:06.523877   14720 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:28:06.523877   14720 round_trippers.go:577] Response Headers:
	I0528 15:28:06.524022   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:28:06 GMT
	I0528 15:28:06.524022   14720 round_trippers.go:580]     Audit-Id: 2c3704d8-7e60-4ad8-a57c-8308e8cdd642
	I0528 15:28:06.524022   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:28:06.524022   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:28:06.524022   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:28:06.524022   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:28:06.524022   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300-m02","uid":"4ff35487-ffa9-4e96-b7b9-c6f33c06ac46","resourceVersion":"1701","creationTimestamp":"2024-05-28T22:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_28T15_07_27_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:07:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3826 chars]
	I0528 15:28:06.525492   14720 pod_ready.go:92] pod "kube-proxy-ckbkv" in "kube-system" namespace has status "Ready":"True"
	I0528 15:28:06.525492   14720 pod_ready.go:81] duration metric: took 8.2459ms for pod "kube-proxy-ckbkv" in "kube-system" namespace to be "Ready" ...
	I0528 15:28:06.525492   14720 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hbbq2" in "kube-system" namespace to be "Ready" ...
	I0528 15:28:06.691179   14720 request.go:629] Waited for 165.4513ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.212.160:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hbbq2
	I0528 15:28:06.691268   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hbbq2
	I0528 15:28:06.691268   14720 round_trippers.go:469] Request Headers:
	I0528 15:28:06.691268   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:28:06.691268   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:28:06.691814   14720 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:28:06.694869   14720 round_trippers.go:577] Response Headers:
	I0528 15:28:06.694869   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:28:06.694869   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:28:06 GMT
	I0528 15:28:06.694869   14720 round_trippers.go:580]     Audit-Id: 08fae60a-143b-4fff-a79f-9c9e48d62604
	I0528 15:28:06.694869   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:28:06.694869   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:28:06.694869   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:28:06.695000   14720 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-hbbq2","generateName":"kube-proxy-","namespace":"kube-system","uid":"3d8065a2-e1bb-4cf6-837a-f63ed0718835","resourceVersion":"1805","creationTimestamp":"2024-05-28T22:04:31Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"1b969d95-d2e7-4f1f-9dfe-2e267564d504","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:04:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1b969d95-d2e7-4f1f-9dfe-2e267564d504\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6039 chars]
	I0528 15:28:06.884532   14720 request.go:629] Waited for 188.5758ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:28:06.884669   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:28:06.884669   14720 round_trippers.go:469] Request Headers:
	I0528 15:28:06.884669   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:28:06.884669   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:28:06.885096   14720 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:28:06.888356   14720 round_trippers.go:577] Response Headers:
	I0528 15:28:06.888356   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:28:06.888356   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:28:06.888356   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:28:06.888356   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:28:06 GMT
	I0528 15:28:06.888356   14720 round_trippers.go:580]     Audit-Id: 50ba4903-cae3-4d8c-b4db-03d93d859617
	I0528 15:28:06.888356   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:28:06.888599   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1935","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0528 15:28:06.889278   14720 pod_ready.go:92] pod "kube-proxy-hbbq2" in "kube-system" namespace has status "Ready":"True"
	I0528 15:28:06.889278   14720 pod_ready.go:81] duration metric: took 363.7854ms for pod "kube-proxy-hbbq2" in "kube-system" namespace to be "Ready" ...
	I0528 15:28:06.889278   14720 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jfckx" in "kube-system" namespace to be "Ready" ...
	I0528 15:28:07.085806   14720 request.go:629] Waited for 196.527ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.212.160:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jfckx
	I0528 15:28:07.086192   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jfckx
	I0528 15:28:07.086192   14720 round_trippers.go:469] Request Headers:
	I0528 15:28:07.086192   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:28:07.086192   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:28:07.087022   14720 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:28:07.087022   14720 round_trippers.go:577] Response Headers:
	I0528 15:28:07.091205   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:28:07 GMT
	I0528 15:28:07.091205   14720 round_trippers.go:580]     Audit-Id: 22a078f0-8a82-48c5-a0c5-9acb3ff273f3
	I0528 15:28:07.091205   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:28:07.091205   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:28:07.091205   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:28:07.091205   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:28:07.091413   14720 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-jfckx","generateName":"kube-proxy-","namespace":"kube-system","uid":"879a98da-431d-4c4d-878a-1c5d63948d55","resourceVersion":"1760","creationTimestamp":"2024-05-28T22:12:11Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"1b969d95-d2e7-4f1f-9dfe-2e267564d504","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:12:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1b969d95-d2e7-4f1f-9dfe-2e267564d504\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6062 chars]
	I0528 15:28:07.287982   14720 request.go:629] Waited for 195.6903ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.212.160:8443/api/v1/nodes/multinode-517300-m03
	I0528 15:28:07.287982   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300-m03
	I0528 15:28:07.288281   14720 round_trippers.go:469] Request Headers:
	I0528 15:28:07.288281   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:28:07.288353   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:28:07.288809   14720 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:28:07.292040   14720 round_trippers.go:577] Response Headers:
	I0528 15:28:07.292040   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:28:07.292040   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:28:07.292040   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:28:07.292335   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:28:07.292385   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:28:07 GMT
	I0528 15:28:07.292385   14720 round_trippers.go:580]     Audit-Id: 2d994222-dfe5-471b-b906-f74889484a1b
	I0528 15:28:07.292720   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300-m03","uid":"d1ead7ca-233f-4d89-9e59-97f5e8e47155","resourceVersion":"1923","creationTimestamp":"2024-05-28T22:23:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_28T15_23_04_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:23:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4399 chars]
	I0528 15:28:07.293776   14720 pod_ready.go:97] node "multinode-517300-m03" hosting pod "kube-proxy-jfckx" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-517300-m03" has status "Ready":"Unknown"
	I0528 15:28:07.293869   14720 pod_ready.go:81] duration metric: took 404.5891ms for pod "kube-proxy-jfckx" in "kube-system" namespace to be "Ready" ...
	E0528 15:28:07.293869   14720 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-517300-m03" hosting pod "kube-proxy-jfckx" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-517300-m03" has status "Ready":"Unknown"
	I0528 15:28:07.293869   14720 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-517300" in "kube-system" namespace to be "Ready" ...
	I0528 15:28:07.492874   14720 request.go:629] Waited for 198.722ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.212.160:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-517300
	I0528 15:28:07.493329   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-517300
	I0528 15:28:07.493372   14720 round_trippers.go:469] Request Headers:
	I0528 15:28:07.493372   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:28:07.493372   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:28:07.494335   14720 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:28:07.494335   14720 round_trippers.go:577] Response Headers:
	I0528 15:28:07.494335   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:28:07.494335   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:28:07.494335   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:28:07 GMT
	I0528 15:28:07.494335   14720 round_trippers.go:580]     Audit-Id: 5058c18a-e06c-408b-840f-69a7e250dbca
	I0528 15:28:07.494335   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:28:07.494335   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:28:07.498494   14720 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-517300","namespace":"kube-system","uid":"2dc1e4d9-ef1b-4f83-8762-569efcb3f4c6","resourceVersion":"1904","creationTimestamp":"2024-05-28T22:04:18Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"84e130f816e1eba82686beb351d9b34a","kubernetes.io/config.mirror":"84e130f816e1eba82686beb351d9b34a","kubernetes.io/config.seen":"2024-05-28T22:04:18.078857889Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:04:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5205 chars]
	I0528 15:28:07.682103   14720 request.go:629] Waited for 182.4692ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:28:07.682296   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes/multinode-517300
	I0528 15:28:07.682296   14720 round_trippers.go:469] Request Headers:
	I0528 15:28:07.682296   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:28:07.682372   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:28:07.682731   14720 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:28:07.686304   14720 round_trippers.go:577] Response Headers:
	I0528 15:28:07.686304   14720 round_trippers.go:580]     Audit-Id: ee2a1c21-53ec-4c0f-a854-afa633edfa9d
	I0528 15:28:07.686422   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:28:07.686422   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:28:07.686422   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:28:07.686422   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:28:07.686422   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:28:07 GMT
	I0528 15:28:07.686551   14720 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1935","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-28T22:04:14Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0528 15:28:07.687231   14720 pod_ready.go:92] pod "kube-scheduler-multinode-517300" in "kube-system" namespace has status "Ready":"True"
	I0528 15:28:07.687340   14720 pod_ready.go:81] duration metric: took 393.2139ms for pod "kube-scheduler-multinode-517300" in "kube-system" namespace to be "Ready" ...
	I0528 15:28:07.687340   14720 pod_ready.go:38] duration metric: took 5.7248958s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0528 15:28:07.687340   14720 api_server.go:52] waiting for apiserver process to appear ...
	I0528 15:28:07.699591   14720 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 15:28:07.726699   14720 command_runner.go:130] > 1882
	I0528 15:28:07.726699   14720 api_server.go:72] duration metric: took 31.5905118s to wait for apiserver process to appear ...
	I0528 15:28:07.726699   14720 api_server.go:88] waiting for apiserver healthz status ...
	I0528 15:28:07.726699   14720 api_server.go:253] Checking apiserver healthz at https://172.18.212.160:8443/healthz ...
	I0528 15:28:07.732815   14720 api_server.go:279] https://172.18.212.160:8443/healthz returned 200:
	ok
	I0528 15:28:07.735554   14720 round_trippers.go:463] GET https://172.18.212.160:8443/version
	I0528 15:28:07.735554   14720 round_trippers.go:469] Request Headers:
	I0528 15:28:07.735554   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:28:07.735554   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:28:07.736091   14720 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:28:07.737876   14720 round_trippers.go:577] Response Headers:
	I0528 15:28:07.737876   14720 round_trippers.go:580]     Audit-Id: c2d7d9cc-1d00-4255-98ae-43b6a12849f4
	I0528 15:28:07.737876   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:28:07.737876   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:28:07.737876   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:28:07.737876   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:28:07.737988   14720 round_trippers.go:580]     Content-Length: 263
	I0528 15:28:07.737988   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:28:07 GMT
	I0528 15:28:07.738029   14720 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.1",
	  "gitCommit": "6911225c3f747e1cd9d109c305436d08b668f086",
	  "gitTreeState": "clean",
	  "buildDate": "2024-05-14T10:42:02Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0528 15:28:07.738029   14720 api_server.go:141] control plane version: v1.30.1
	I0528 15:28:07.738029   14720 api_server.go:131] duration metric: took 11.3306ms to wait for apiserver health ...
	I0528 15:28:07.738029   14720 system_pods.go:43] waiting for kube-system pods to appear ...
	I0528 15:28:07.896078   14720 request.go:629] Waited for 157.8905ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.212.160:8443/api/v1/namespaces/kube-system/pods
	I0528 15:28:07.896307   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/namespaces/kube-system/pods
	I0528 15:28:07.896377   14720 round_trippers.go:469] Request Headers:
	I0528 15:28:07.896377   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:28:07.896377   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:28:07.897093   14720 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:28:07.902717   14720 round_trippers.go:577] Response Headers:
	I0528 15:28:07.902717   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:28:07.902717   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:28:07.902717   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:28:07.902717   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:28:07.902717   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:28:07 GMT
	I0528 15:28:07.902717   14720 round_trippers.go:580]     Audit-Id: 0f563ce7-986d-4688-a008-0aa977b89955
	I0528 15:28:07.904475   14720 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1958"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-p9vb9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d1d5d377-99a6-445e-9597-446b3057e80f","resourceVersion":"1954","creationTimestamp":"2024-05-28T22:04:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d11a1b0d-8687-41d7-b841-24979449141a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:04:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d11a1b0d-8687-41d7-b841-24979449141a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 86561 chars]
	I0528 15:28:07.908310   14720 system_pods.go:59] 12 kube-system pods found
	I0528 15:28:07.908310   14720 system_pods.go:61] "coredns-7db6d8ff4d-p9vb9" [d1d5d377-99a6-445e-9597-446b3057e80f] Running
	I0528 15:28:07.908310   14720 system_pods.go:61] "etcd-multinode-517300" [8e99dbbd-d8ae-4164-8087-9d0d43ebec00] Running
	I0528 15:28:07.908310   14720 system_pods.go:61] "kindnet-jf49h" [07acbd95-ca5b-4a2f-aab4-430f47fcf186] Running
	I0528 15:28:07.908310   14720 system_pods.go:61] "kindnet-rmwrj" [df5913a4-73bb-4d14-8b59-a59d2b449bbe] Running
	I0528 15:28:07.908310   14720 system_pods.go:61] "kindnet-smtdx" [e50433af-d238-4e3c-af31-4b248e50eb9d] Running
	I0528 15:28:07.908310   14720 system_pods.go:61] "kube-apiserver-multinode-517300" [ea194cef-cdf6-4ed7-b555-b57cc085b75b] Running
	I0528 15:28:07.908310   14720 system_pods.go:61] "kube-controller-manager-multinode-517300" [8de5dc69-a626-438c-8f8e-1542254825fb] Running
	I0528 15:28:07.908310   14720 system_pods.go:61] "kube-proxy-ckbkv" [13ccca53-976a-4839-be94-d3960388ebfa] Running
	I0528 15:28:07.908310   14720 system_pods.go:61] "kube-proxy-hbbq2" [3d8065a2-e1bb-4cf6-837a-f63ed0718835] Running
	I0528 15:28:07.908310   14720 system_pods.go:61] "kube-proxy-jfckx" [879a98da-431d-4c4d-878a-1c5d63948d55] Running
	I0528 15:28:07.908310   14720 system_pods.go:61] "kube-scheduler-multinode-517300" [2dc1e4d9-ef1b-4f83-8762-569efcb3f4c6] Running
	I0528 15:28:07.908310   14720 system_pods.go:61] "storage-provisioner" [ab978e2d-322a-4207-9539-d2d92a25e566] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0528 15:28:07.908310   14720 system_pods.go:74] duration metric: took 170.2799ms to wait for pod list to return data ...
	I0528 15:28:07.908310   14720 default_sa.go:34] waiting for default service account to be created ...
	I0528 15:28:08.096890   14720 request.go:629] Waited for 187.6933ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.212.160:8443/api/v1/namespaces/default/serviceaccounts
	I0528 15:28:08.096890   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/namespaces/default/serviceaccounts
	I0528 15:28:08.096890   14720 round_trippers.go:469] Request Headers:
	I0528 15:28:08.096890   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:28:08.096890   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:28:08.097678   14720 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:28:08.101167   14720 round_trippers.go:577] Response Headers:
	I0528 15:28:08.101167   14720 round_trippers.go:580]     Content-Length: 262
	I0528 15:28:08.101167   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:28:08 GMT
	I0528 15:28:08.101167   14720 round_trippers.go:580]     Audit-Id: 427cddd0-59d9-4555-a3f2-43b6d339ccb8
	I0528 15:28:08.101167   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:28:08.101262   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:28:08.101262   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:28:08.101262   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:28:08.101262   14720 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"1958"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"1a497e7b-5280-4f3b-b9dc-265476c8a394","resourceVersion":"364","creationTimestamp":"2024-05-28T22:04:31Z"}}]}
	I0528 15:28:08.101762   14720 default_sa.go:45] found service account: "default"
	I0528 15:28:08.101762   14720 default_sa.go:55] duration metric: took 193.4518ms for default service account to be created ...
	I0528 15:28:08.101916   14720 system_pods.go:116] waiting for k8s-apps to be running ...
	I0528 15:28:08.290437   14720 request.go:629] Waited for 188.1438ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.212.160:8443/api/v1/namespaces/kube-system/pods
	I0528 15:28:08.290616   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/namespaces/kube-system/pods
	I0528 15:28:08.290616   14720 round_trippers.go:469] Request Headers:
	I0528 15:28:08.290616   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:28:08.290616   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:28:08.291351   14720 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:28:08.291351   14720 round_trippers.go:577] Response Headers:
	I0528 15:28:08.296883   14720 round_trippers.go:580]     Audit-Id: f903ad3b-c5ff-4e7b-8ed3-aa74c240c124
	I0528 15:28:08.296953   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:28:08.296969   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:28:08.296969   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:28:08.297056   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:28:08.297056   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:28:08 GMT
	I0528 15:28:08.298604   14720 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1958"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-p9vb9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d1d5d377-99a6-445e-9597-446b3057e80f","resourceVersion":"1954","creationTimestamp":"2024-05-28T22:04:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d11a1b0d-8687-41d7-b841-24979449141a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-28T22:04:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d11a1b0d-8687-41d7-b841-24979449141a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 86561 chars]
	I0528 15:28:08.303231   14720 system_pods.go:86] 12 kube-system pods found
	I0528 15:28:08.303231   14720 system_pods.go:89] "coredns-7db6d8ff4d-p9vb9" [d1d5d377-99a6-445e-9597-446b3057e80f] Running
	I0528 15:28:08.303231   14720 system_pods.go:89] "etcd-multinode-517300" [8e99dbbd-d8ae-4164-8087-9d0d43ebec00] Running
	I0528 15:28:08.303231   14720 system_pods.go:89] "kindnet-jf49h" [07acbd95-ca5b-4a2f-aab4-430f47fcf186] Running
	I0528 15:28:08.303231   14720 system_pods.go:89] "kindnet-rmwrj" [df5913a4-73bb-4d14-8b59-a59d2b449bbe] Running
	I0528 15:28:08.303231   14720 system_pods.go:89] "kindnet-smtdx" [e50433af-d238-4e3c-af31-4b248e50eb9d] Running
	I0528 15:28:08.303231   14720 system_pods.go:89] "kube-apiserver-multinode-517300" [ea194cef-cdf6-4ed7-b555-b57cc085b75b] Running
	I0528 15:28:08.303231   14720 system_pods.go:89] "kube-controller-manager-multinode-517300" [8de5dc69-a626-438c-8f8e-1542254825fb] Running
	I0528 15:28:08.303231   14720 system_pods.go:89] "kube-proxy-ckbkv" [13ccca53-976a-4839-be94-d3960388ebfa] Running
	I0528 15:28:08.303231   14720 system_pods.go:89] "kube-proxy-hbbq2" [3d8065a2-e1bb-4cf6-837a-f63ed0718835] Running
	I0528 15:28:08.303231   14720 system_pods.go:89] "kube-proxy-jfckx" [879a98da-431d-4c4d-878a-1c5d63948d55] Running
	I0528 15:28:08.303231   14720 system_pods.go:89] "kube-scheduler-multinode-517300" [2dc1e4d9-ef1b-4f83-8762-569efcb3f4c6] Running
	I0528 15:28:08.303231   14720 system_pods.go:89] "storage-provisioner" [ab978e2d-322a-4207-9539-d2d92a25e566] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0528 15:28:08.303231   14720 system_pods.go:126] duration metric: took 201.3144ms to wait for k8s-apps to be running ...
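The two pod checks above (system_pods.go:61 and :89) both list the kube-system namespace and require each pod to report phase Running; the run proceeds even though storage-provisioner reports ContainersNotReady. A minimal, standalone sketch of the same check with client-go, assuming an illustrative kubeconfig path for this profile (this is not minikube's actual code path):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Illustrative kubeconfig location; minikube maintains kubeconfig entries per profile.
	cfg, err := clientcmd.BuildConfigFromFlags("", `C:\Users\jenkins.minikube1\.kube\config`)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		// Mirrors the "12 kube-system pods found ... Running" lines in the log.
		fmt.Printf("%-45s phase=%s running=%v\n", p.Name, p.Status.Phase, p.Status.Phase == corev1.PodRunning)
	}
}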
	I0528 15:28:08.303231   14720 system_svc.go:44] waiting for kubelet service to be running ....
	I0528 15:28:08.315547   14720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0528 15:28:08.340531   14720 system_svc.go:56] duration metric: took 37.3002ms WaitForService to wait for kubelet
	I0528 15:28:08.340531   14720 kubeadm.go:576] duration metric: took 32.2043422s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0528 15:28:08.340531   14720 node_conditions.go:102] verifying NodePressure condition ...
	I0528 15:28:08.482142   14720 request.go:629] Waited for 141.6102ms due to client-side throttling, not priority and fairness, request: GET:https://172.18.212.160:8443/api/v1/nodes
	I0528 15:28:08.482449   14720 round_trippers.go:463] GET https://172.18.212.160:8443/api/v1/nodes
	I0528 15:28:08.482449   14720 round_trippers.go:469] Request Headers:
	I0528 15:28:08.482449   14720 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0528 15:28:08.482449   14720 round_trippers.go:473]     Accept: application/json, */*
	I0528 15:28:08.490687   14720 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 15:28:08.490687   14720 round_trippers.go:577] Response Headers:
	I0528 15:28:08.490687   14720 round_trippers.go:580]     Audit-Id: e5dad35e-a386-4963-8368-b04f7e98ea05
	I0528 15:28:08.490754   14720 round_trippers.go:580]     Cache-Control: no-cache, private
	I0528 15:28:08.490754   14720 round_trippers.go:580]     Content-Type: application/json
	I0528 15:28:08.490754   14720 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 4c739af1-6f2a-4a9f-ba3e-68ff3c23763f
	I0528 15:28:08.490754   14720 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 69c8523a-8b54-4f3f-b853-daf50ded459a
	I0528 15:28:08.490754   14720 round_trippers.go:580]     Date: Tue, 28 May 2024 22:28:08 GMT
	I0528 15:28:08.490928   14720 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1958"},"items":[{"metadata":{"name":"multinode-517300","uid":"52b801e3-d3a7-4eb7-aeed-229230209ac7","resourceVersion":"1935","creationTimestamp":"2024-05-28T22:04:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-517300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c95b4fdda455689199e2a93674568b261e34dc82","minikube.k8s.io/name":"multinode-517300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_28T15_04_19_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 15502 chars]
	I0528 15:28:08.491949   14720 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0528 15:28:08.491949   14720 node_conditions.go:123] node cpu capacity is 2
	I0528 15:28:08.491949   14720 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0528 15:28:08.491949   14720 node_conditions.go:123] node cpu capacity is 2
	I0528 15:28:08.491949   14720 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0528 15:28:08.491949   14720 node_conditions.go:123] node cpu capacity is 2
	I0528 15:28:08.491949   14720 node_conditions.go:105] duration metric: took 151.418ms to run NodePressure ...
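Each of the three nodes reports the same capacity above (2 CPUs, 17734596Ki ephemeral storage). A small fragment that prints those fields, written to sit alongside the previous sketch in the same package and reuse a *kubernetes.Clientset built the same way:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// printNodeCapacity mirrors the node_conditions lines above: list all nodes and
// print the CPU and ephemeral-storage capacity each one advertises.
func printNodeCapacity(ctx context.Context, cs *kubernetes.Clientset) error {
	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n",
			n.Name, n.Status.Capacity.Cpu().String(), n.Status.Capacity.StorageEphemeral().String())
	}
	return nil
}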
	I0528 15:28:08.491949   14720 start.go:240] waiting for startup goroutines ...
	I0528 15:28:08.491949   14720 start.go:245] waiting for cluster config update ...
	I0528 15:28:08.491949   14720 start.go:254] writing updated cluster config ...
	I0528 15:28:08.497636   14720 out.go:177] 
	I0528 15:28:08.512223   14720 config.go:182] Loaded profile config "ha-941700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0528 15:28:08.512782   14720 config.go:182] Loaded profile config "multinode-517300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0528 15:28:08.512984   14720 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-517300\config.json ...
	I0528 15:28:08.520270   14720 out.go:177] * Starting "multinode-517300-m02" worker node in "multinode-517300" cluster
	I0528 15:28:08.526179   14720 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0528 15:28:08.526254   14720 cache.go:56] Caching tarball of preloaded images
	I0528 15:28:08.526752   14720 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0528 15:28:08.527019   14720 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0528 15:28:08.527245   14720 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-517300\config.json ...
	I0528 15:28:08.529569   14720 start.go:360] acquireMachinesLock for multinode-517300-m02: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0528 15:28:08.529569   14720 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-517300-m02"
	I0528 15:28:08.530197   14720 start.go:96] Skipping create...Using existing machine configuration
	I0528 15:28:08.530197   14720 fix.go:54] fixHost starting: m02
	I0528 15:28:08.530263   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300-m02 ).state
	I0528 15:28:10.601004   14720 main.go:141] libmachine: [stdout =====>] : Off
	
	I0528 15:28:10.601004   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:28:10.601004   14720 fix.go:112] recreateIfNeeded on multinode-517300-m02: state=Stopped err=<nil>
	W0528 15:28:10.601004   14720 fix.go:138] unexpected machine state, will restart: <nil>
	I0528 15:28:10.606025   14720 out.go:177] * Restarting existing hyperv VM for "multinode-517300-m02" ...
	I0528 15:28:10.608371   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-517300-m02
	I0528 15:28:13.653109   14720 main.go:141] libmachine: [stdout =====>] : 
	I0528 15:28:13.653109   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:28:13.653109   14720 main.go:141] libmachine: Waiting for host to start...
	I0528 15:28:13.653109   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300-m02 ).state
	I0528 15:28:15.896343   14720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:28:15.896570   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:28:15.896570   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300-m02 ).networkadapters[0]).ipaddresses[0]
	I0528 15:28:18.408759   14720 main.go:141] libmachine: [stdout =====>] : 
	I0528 15:28:18.417930   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:28:19.432997   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300-m02 ).state
	I0528 15:28:21.635399   14720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:28:21.635399   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:28:21.635399   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300-m02 ).networkadapters[0]).ipaddresses[0]
	I0528 15:28:24.187148   14720 main.go:141] libmachine: [stdout =====>] : 
	I0528 15:28:24.187148   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:28:25.201546   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300-m02 ).state
	I0528 15:28:27.387414   14720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:28:27.387457   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:28:27.387532   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300-m02 ).networkadapters[0]).ipaddresses[0]
	I0528 15:28:29.915881   14720 main.go:141] libmachine: [stdout =====>] : 
	I0528 15:28:29.921433   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:28:30.935225   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300-m02 ).state
	I0528 15:28:33.161804   14720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:28:33.161804   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:28:33.168045   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300-m02 ).networkadapters[0]).ipaddresses[0]
	I0528 15:28:35.685374   14720 main.go:141] libmachine: [stdout =====>] : 
	I0528 15:28:35.685374   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:28:36.691368   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300-m02 ).state
	I0528 15:28:38.912262   14720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:28:38.912262   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:28:38.912546   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300-m02 ).networkadapters[0]).ipaddresses[0]
	I0528 15:28:41.417385   14720 main.go:141] libmachine: [stdout =====>] : 172.18.212.184
	
	I0528 15:28:41.417385   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:28:41.430496   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300-m02 ).state
	I0528 15:28:43.579523   14720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:28:43.579523   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:28:43.579523   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300-m02 ).networkadapters[0]).ipaddresses[0]
	I0528 15:28:46.070748   14720 main.go:141] libmachine: [stdout =====>] : 172.18.212.184
	
	I0528 15:28:46.081958   14720 main.go:141] libmachine: [stderr =====>] : 
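The block above is the driver's start-up wait: it alternately asks PowerShell for the VM state and for the first adapter's first IP address, looping until the guest reports one (172.18.212.184). A minimal sketch of that polling loop with os/exec, assuming powershell.exe and the Hyper-V module are available as in the log, using the VM name shown above:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// vmIPv4 shells out to PowerShell the same way the log lines above do and returns
// the first IP address of the VM's first network adapter ("" until DHCP completes).
func vmIPv4(vmName string) (string, error) {
	script := fmt.Sprintf(`(( Hyper-V\Get-VM %s ).networkadapters[0]).ipaddresses[0]`, vmName)
	out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", script).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	// Poll until the guest has an address, roughly mirroring "Waiting for host to start...".
	for {
		ip, err := vmIPv4("multinode-517300-m02")
		if err == nil && ip != "" {
			fmt.Println("VM reachable at", ip)
			return
		}
		time.Sleep(time.Second)
	}
}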
	I0528 15:28:46.082236   14720 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-517300\config.json ...
	I0528 15:28:46.084379   14720 machine.go:94] provisionDockerMachine start ...
	I0528 15:28:46.084379   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300-m02 ).state
	I0528 15:28:48.201436   14720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:28:48.212279   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:28:48.212279   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300-m02 ).networkadapters[0]).ipaddresses[0]
	I0528 15:28:50.689635   14720 main.go:141] libmachine: [stdout =====>] : 172.18.212.184
	
	I0528 15:28:50.689635   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:28:50.707276   14720 main.go:141] libmachine: Using SSH client type: native
	I0528 15:28:50.708366   14720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5ca4a0] 0x5cd080 <nil>  [] 0s} 172.18.212.184 22 <nil> <nil>}
	I0528 15:28:50.708366   14720 main.go:141] libmachine: About to run SSH command:
	hostname
	I0528 15:28:50.844070   14720 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0528 15:28:50.844070   14720 buildroot.go:166] provisioning hostname "multinode-517300-m02"
	I0528 15:28:50.844070   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300-m02 ).state
	I0528 15:28:52.951176   14720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:28:52.951176   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:28:52.962005   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300-m02 ).networkadapters[0]).ipaddresses[0]
	I0528 15:28:55.479679   14720 main.go:141] libmachine: [stdout =====>] : 172.18.212.184
	
	I0528 15:28:55.479679   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:28:55.497494   14720 main.go:141] libmachine: Using SSH client type: native
	I0528 15:28:55.498052   14720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5ca4a0] 0x5cd080 <nil>  [] 0s} 172.18.212.184 22 <nil> <nil>}
	I0528 15:28:55.498052   14720 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-517300-m02 && echo "multinode-517300-m02" | sudo tee /etc/hostname
	I0528 15:28:55.658860   14720 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-517300-m02
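Everything from here on (hostname, certificates, the docker unit) is pushed over SSH to the address obtained above. A standalone sketch of running the same hostname command with golang.org/x/crypto/ssh, using the key path, user and address that appear in this log; this is illustrative only, not minikube's libmachine implementation:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path and user taken from the sshutil lines later in this log; host key
	// checking is skipped only because the target is a freshly provisioned local VM.
	key, err := os.ReadFile(`C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-517300-m02\id_rsa`)
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
	}
	client, err := ssh.Dial("tcp", "172.18.212.184:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()

	out, err := sess.CombinedOutput(`sudo hostname multinode-517300-m02 && echo "multinode-517300-m02" | sudo tee /etc/hostname`)
	fmt.Printf("err=%v output=%s\n", err, out)
}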
	
	I0528 15:28:55.658860   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300-m02 ).state
	I0528 15:28:57.728544   14720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:28:57.728544   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:28:57.740730   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300-m02 ).networkadapters[0]).ipaddresses[0]
	I0528 15:29:00.284213   14720 main.go:141] libmachine: [stdout =====>] : 172.18.212.184
	
	I0528 15:29:00.295451   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:29:00.301384   14720 main.go:141] libmachine: Using SSH client type: native
	I0528 15:29:00.301384   14720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5ca4a0] 0x5cd080 <nil>  [] 0s} 172.18.212.184 22 <nil> <nil>}
	I0528 15:29:00.301964   14720 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-517300-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-517300-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-517300-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0528 15:29:00.455898   14720 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0528 15:29:00.455898   14720 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0528 15:29:00.455898   14720 buildroot.go:174] setting up certificates
	I0528 15:29:00.455898   14720 provision.go:84] configureAuth start
	I0528 15:29:00.455898   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300-m02 ).state
	I0528 15:29:02.528540   14720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:29:02.528540   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:29:02.539421   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300-m02 ).networkadapters[0]).ipaddresses[0]
	I0528 15:29:05.104018   14720 main.go:141] libmachine: [stdout =====>] : 172.18.212.184
	
	I0528 15:29:05.115205   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:29:05.115205   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300-m02 ).state
	I0528 15:29:07.189111   14720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:29:07.201160   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:29:07.201160   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300-m02 ).networkadapters[0]).ipaddresses[0]
	I0528 15:29:09.669469   14720 main.go:141] libmachine: [stdout =====>] : 172.18.212.184
	
	I0528 15:29:09.680070   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:29:09.680070   14720 provision.go:143] copyHostCerts
	I0528 15:29:09.680190   14720 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0528 15:29:09.680190   14720 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0528 15:29:09.680190   14720 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0528 15:29:09.680869   14720 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1675 bytes)
	I0528 15:29:09.682037   14720 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0528 15:29:09.682158   14720 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0528 15:29:09.682158   14720 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0528 15:29:09.682788   14720 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0528 15:29:09.683766   14720 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0528 15:29:09.683831   14720 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0528 15:29:09.683831   14720 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0528 15:29:09.684364   14720 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0528 15:29:09.685765   14720 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-517300-m02 san=[127.0.0.1 172.18.212.184 localhost minikube multinode-517300-m02]
	I0528 15:29:09.781810   14720 provision.go:177] copyRemoteCerts
	I0528 15:29:09.795340   14720 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0528 15:29:09.795433   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300-m02 ).state
	I0528 15:29:11.888132   14720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:29:11.898848   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:29:11.899047   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300-m02 ).networkadapters[0]).ipaddresses[0]
	I0528 15:29:14.388985   14720 main.go:141] libmachine: [stdout =====>] : 172.18.212.184
	
	I0528 15:29:14.388985   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:29:14.399699   14720 sshutil.go:53] new ssh client: &{IP:172.18.212.184 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-517300-m02\id_rsa Username:docker}
	I0528 15:29:14.508090   14720 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7126169s)
	I0528 15:29:14.508090   14720 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0528 15:29:14.508090   14720 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0528 15:29:14.559204   14720 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0528 15:29:14.559371   14720 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0528 15:29:14.603083   14720 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0528 15:29:14.603530   14720 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0528 15:29:14.649140   14720 provision.go:87] duration metric: took 14.193197s to configureAuth
	I0528 15:29:14.649215   14720 buildroot.go:189] setting minikube options for container-runtime
	I0528 15:29:14.649447   14720 config.go:182] Loaded profile config "multinode-517300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0528 15:29:14.649447   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300-m02 ).state
	I0528 15:29:16.780294   14720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:29:16.786953   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:29:16.786953   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300-m02 ).networkadapters[0]).ipaddresses[0]
	I0528 15:29:19.285551   14720 main.go:141] libmachine: [stdout =====>] : 172.18.212.184
	
	I0528 15:29:19.285551   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:29:19.301355   14720 main.go:141] libmachine: Using SSH client type: native
	I0528 15:29:19.302118   14720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5ca4a0] 0x5cd080 <nil>  [] 0s} 172.18.212.184 22 <nil> <nil>}
	I0528 15:29:19.302118   14720 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0528 15:29:19.436255   14720 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0528 15:29:19.436255   14720 buildroot.go:70] root file system type: tmpfs
	I0528 15:29:19.436784   14720 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0528 15:29:19.436931   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300-m02 ).state
	I0528 15:29:21.543859   14720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:29:21.555238   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:29:21.555238   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300-m02 ).networkadapters[0]).ipaddresses[0]
	I0528 15:29:24.106892   14720 main.go:141] libmachine: [stdout =====>] : 172.18.212.184
	
	I0528 15:29:24.106892   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:29:24.113894   14720 main.go:141] libmachine: Using SSH client type: native
	I0528 15:29:24.114646   14720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5ca4a0] 0x5cd080 <nil>  [] 0s} 172.18.212.184 22 <nil> <nil>}
	I0528 15:29:24.114646   14720 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.18.212.160"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0528 15:29:24.280026   14720 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.18.212.160
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0528 15:29:24.280026   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300-m02 ).state
	I0528 15:29:26.352349   14720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:29:26.363353   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:29:26.363353   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300-m02 ).networkadapters[0]).ipaddresses[0]
	I0528 15:29:28.879560   14720 main.go:141] libmachine: [stdout =====>] : 172.18.212.184
	
	I0528 15:29:28.879715   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:29:28.885356   14720 main.go:141] libmachine: Using SSH client type: native
	I0528 15:29:28.885988   14720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5ca4a0] 0x5cd080 <nil>  [] 0s} 172.18.212.184 22 <nil> <nil>}
	I0528 15:29:28.885988   14720 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0528 15:29:31.207338   14720 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0528 15:29:31.207892   14720 machine.go:97] duration metric: took 45.1233681s to provisionDockerMachine
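The docker.service contents shown above are rendered host-side, piped through `sudo tee` to docker.service.new, then swapped in by the diff/mv/daemon-reload one-liner before the service is enabled and restarted. A minimal sketch of the rendering step with text/template, using a hypothetical cut-down template that keeps only the NO_PROXY environment and the ExecStart override; the real unit carries the full [Unit]/[Service]/[Install] sections seen in the log:

package main

import (
	"os"
	"text/template"
)

// unitTmpl is a deliberately trimmed, hypothetical template; compare it with the
// full unit text echoed back by tee in the log above.
const unitTmpl = `[Service]
{{if .NoProxy}}Environment="NO_PROXY={{.NoProxy}}"
{{end}}ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --label provider=hyperv
`

func main() {
	t := template.Must(template.New("docker.service").Parse(unitTmpl))
	// minikube pipes the rendered text over SSH into /lib/systemd/system/docker.service.new.
	if err := t.Execute(os.Stdout, struct{ NoProxy string }{NoProxy: "172.18.212.160"}); err != nil {
		panic(err)
	}
}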
	I0528 15:29:31.207892   14720 start.go:293] postStartSetup for "multinode-517300-m02" (driver="hyperv")
	I0528 15:29:31.207892   14720 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0528 15:29:31.219883   14720 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0528 15:29:31.219883   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300-m02 ).state
	I0528 15:29:33.264306   14720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:29:33.275067   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:29:33.275067   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300-m02 ).networkadapters[0]).ipaddresses[0]
	I0528 15:29:35.762354   14720 main.go:141] libmachine: [stdout =====>] : 172.18.212.184
	
	I0528 15:29:35.762354   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:29:35.773602   14720 sshutil.go:53] new ssh client: &{IP:172.18.212.184 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-517300-m02\id_rsa Username:docker}
	I0528 15:29:35.883275   14720 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.6633773s)
	I0528 15:29:35.898387   14720 ssh_runner.go:195] Run: cat /etc/os-release
	I0528 15:29:35.904482   14720 command_runner.go:130] > NAME=Buildroot
	I0528 15:29:35.904741   14720 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0528 15:29:35.904741   14720 command_runner.go:130] > ID=buildroot
	I0528 15:29:35.904741   14720 command_runner.go:130] > VERSION_ID=2023.02.9
	I0528 15:29:35.904741   14720 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0528 15:29:35.904741   14720 info.go:137] Remote host: Buildroot 2023.02.9
	I0528 15:29:35.904741   14720 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0528 15:29:35.904741   14720 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0528 15:29:35.906099   14720 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\17802.pem -> 17802.pem in /etc/ssl/certs
	I0528 15:29:35.906099   14720 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\17802.pem -> /etc/ssl/certs/17802.pem
	I0528 15:29:35.917896   14720 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0528 15:29:35.935907   14720 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\17802.pem --> /etc/ssl/certs/17802.pem (1708 bytes)
	I0528 15:29:35.992912   14720 start.go:296] duration metric: took 4.7850047s for postStartSetup
	I0528 15:29:35.992912   14720 fix.go:56] duration metric: took 1m27.4624344s for fixHost
	I0528 15:29:35.992912   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300-m02 ).state
	I0528 15:29:38.096190   14720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:29:38.096190   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:29:38.107030   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300-m02 ).networkadapters[0]).ipaddresses[0]
	I0528 15:29:40.613931   14720 main.go:141] libmachine: [stdout =====>] : 172.18.212.184
	
	I0528 15:29:40.624338   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:29:40.629385   14720 main.go:141] libmachine: Using SSH client type: native
	I0528 15:29:40.630312   14720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5ca4a0] 0x5cd080 <nil>  [] 0s} 172.18.212.184 22 <nil> <nil>}
	I0528 15:29:40.630312   14720 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0528 15:29:40.765152   14720 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716935380.765610513
	
	I0528 15:29:40.765152   14720 fix.go:216] guest clock: 1716935380.765610513
	I0528 15:29:40.765152   14720 fix.go:229] Guest: 2024-05-28 15:29:40.765610513 -0700 PDT Remote: 2024-05-28 15:29:35.9929122 -0700 PDT m=+251.282553201 (delta=4.772698313s)
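fix.go is comparing the guest clock (read with what appears to be `date +%s.%N`, printed above in printf-mangled form as `date +%!s(MISSING).%!N(MISSING)`) against the host's timestamp, and the ~4.77s delta is large enough that the guest clock is reset with `sudo date -s @<seconds>` below. A small sketch of that comparison using the two timestamps printed above; the one-second tolerance is illustrative, not minikube's actual threshold:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Guest and host ("Remote") values copied from the fix.go:229 line above.
	guest := time.Unix(1716935380, 765610513)
	host := time.Date(2024, time.May, 28, 15, 29, 35, 992912200, time.FixedZone("PDT", -7*3600))

	delta := guest.Sub(host)
	fmt.Println("clock delta:", delta)

	const tolerance = time.Second // illustrative threshold
	if delta > tolerance || delta < -tolerance {
		fmt.Printf("would run: sudo date -s @%d\n", guest.Unix())
	}
}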
	I0528 15:29:40.765152   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300-m02 ).state
	I0528 15:29:42.835864   14720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:29:42.846861   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:29:42.846988   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300-m02 ).networkadapters[0]).ipaddresses[0]
	I0528 15:29:45.315454   14720 main.go:141] libmachine: [stdout =====>] : 172.18.212.184
	
	I0528 15:29:45.315527   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:29:45.322127   14720 main.go:141] libmachine: Using SSH client type: native
	I0528 15:29:45.322886   14720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5ca4a0] 0x5cd080 <nil>  [] 0s} 172.18.212.184 22 <nil> <nil>}
	I0528 15:29:45.322886   14720 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1716935380
	I0528 15:29:45.472129   14720 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue May 28 22:29:40 UTC 2024
	
	I0528 15:29:45.472129   14720 fix.go:236] clock set: Tue May 28 22:29:40 UTC 2024
	 (err=<nil>)
	I0528 15:29:45.472129   14720 start.go:83] releasing machines lock for "multinode-517300-m02", held for 1m36.9422501s
	I0528 15:29:45.472129   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300-m02 ).state
	I0528 15:29:47.553011   14720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:29:47.553011   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:29:47.564528   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300-m02 ).networkadapters[0]).ipaddresses[0]
	I0528 15:29:50.028657   14720 main.go:141] libmachine: [stdout =====>] : 172.18.212.184
	
	I0528 15:29:50.040719   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:29:50.041999   14720 out.go:177] * Found network options:
	I0528 15:29:50.046526   14720 out.go:177]   - NO_PROXY=172.18.212.160
	W0528 15:29:50.050048   14720 proxy.go:119] fail to check proxy env: Error ip not in block
	I0528 15:29:50.052412   14720 out.go:177]   - NO_PROXY=172.18.212.160
	W0528 15:29:50.054558   14720 proxy.go:119] fail to check proxy env: Error ip not in block
	W0528 15:29:50.056218   14720 proxy.go:119] fail to check proxy env: Error ip not in block
	I0528 15:29:50.060034   14720 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0528 15:29:50.060101   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300-m02 ).state
	I0528 15:29:50.069772   14720 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0528 15:29:50.069772   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300-m02 ).state
	I0528 15:29:52.204212   14720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:29:52.218842   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:29:52.218931   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300-m02 ).networkadapters[0]).ipaddresses[0]
	I0528 15:29:52.218931   14720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:29:52.218931   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:29:52.218931   14720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300-m02 ).networkadapters[0]).ipaddresses[0]
	I0528 15:29:54.889707   14720 main.go:141] libmachine: [stdout =====>] : 172.18.212.184
	
	I0528 15:29:54.890037   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:29:54.890109   14720 sshutil.go:53] new ssh client: &{IP:172.18.212.184 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-517300-m02\id_rsa Username:docker}
	I0528 15:29:54.912631   14720 main.go:141] libmachine: [stdout =====>] : 172.18.212.184
	
	I0528 15:29:54.912631   14720 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:29:54.912631   14720 sshutil.go:53] new ssh client: &{IP:172.18.212.184 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-517300-m02\id_rsa Username:docker}
	I0528 15:29:54.973580   14720 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0528 15:29:54.979199   14720 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.9093185s)
	W0528 15:29:54.979281   14720 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0528 15:29:54.989848   14720 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0528 15:29:55.113374   14720 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0528 15:29:55.114177   14720 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0528 15:29:55.114177   14720 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.0541271s)
	I0528 15:29:55.114375   14720 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0528 15:29:55.114576   14720 start.go:494] detecting cgroup driver to use...
	I0528 15:29:55.114774   14720 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0528 15:29:55.152758   14720 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0528 15:29:55.164721   14720 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0528 15:29:55.196897   14720 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0528 15:29:55.213694   14720 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0528 15:29:55.225645   14720 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0528 15:29:55.253575   14720 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0528 15:29:55.285652   14720 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0528 15:29:55.317283   14720 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0528 15:29:55.347726   14720 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0528 15:29:55.378344   14720 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0528 15:29:55.410526   14720 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0528 15:29:55.440490   14720 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0528 15:29:55.472958   14720 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0528 15:29:55.490085   14720 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0528 15:29:55.502802   14720 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0528 15:29:55.532937   14720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 15:29:55.710667   14720 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0528 15:29:55.739870   14720 start.go:494] detecting cgroup driver to use...
	I0528 15:29:55.752929   14720 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0528 15:29:55.777995   14720 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0528 15:29:55.777995   14720 command_runner.go:130] > [Unit]
	I0528 15:29:55.777995   14720 command_runner.go:130] > Description=Docker Application Container Engine
	I0528 15:29:55.777995   14720 command_runner.go:130] > Documentation=https://docs.docker.com
	I0528 15:29:55.777995   14720 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0528 15:29:55.777995   14720 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0528 15:29:55.778144   14720 command_runner.go:130] > StartLimitBurst=3
	I0528 15:29:55.778144   14720 command_runner.go:130] > StartLimitIntervalSec=60
	I0528 15:29:55.778144   14720 command_runner.go:130] > [Service]
	I0528 15:29:55.778144   14720 command_runner.go:130] > Type=notify
	I0528 15:29:55.778270   14720 command_runner.go:130] > Restart=on-failure
	I0528 15:29:55.778270   14720 command_runner.go:130] > Environment=NO_PROXY=172.18.212.160
	I0528 15:29:55.778270   14720 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0528 15:29:55.778270   14720 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0528 15:29:55.778362   14720 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0528 15:29:55.778393   14720 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0528 15:29:55.778449   14720 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0528 15:29:55.778449   14720 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0528 15:29:55.778449   14720 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0528 15:29:55.778554   14720 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0528 15:29:55.778554   14720 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0528 15:29:55.778554   14720 command_runner.go:130] > ExecStart=
	I0528 15:29:55.778612   14720 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0528 15:29:55.778612   14720 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0528 15:29:55.778674   14720 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0528 15:29:55.778674   14720 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0528 15:29:55.778746   14720 command_runner.go:130] > LimitNOFILE=infinity
	I0528 15:29:55.778746   14720 command_runner.go:130] > LimitNPROC=infinity
	I0528 15:29:55.778746   14720 command_runner.go:130] > LimitCORE=infinity
	I0528 15:29:55.778746   14720 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0528 15:29:55.778797   14720 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0528 15:29:55.778797   14720 command_runner.go:130] > TasksMax=infinity
	I0528 15:29:55.778797   14720 command_runner.go:130] > TimeoutStartSec=0
	I0528 15:29:55.778851   14720 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0528 15:29:55.778851   14720 command_runner.go:130] > Delegate=yes
	I0528 15:29:55.778851   14720 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0528 15:29:55.778919   14720 command_runner.go:130] > KillMode=process
	I0528 15:29:55.778919   14720 command_runner.go:130] > [Install]
	I0528 15:29:55.779046   14720 command_runner.go:130] > WantedBy=multi-user.target
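
The drop-in dumped above is the standard systemd override pattern that its own comments describe: the first, empty ExecStart= clears the command inherited from the base dockerd unit, and the second ExecStart= supplies the replacement; without the clearing line systemd would refuse to start the service because a non-oneshot unit may only have one ExecStart= setting. A minimal sketch of the same pattern, with an illustrative drop-in name and dockerd flags rather than the exact ones used in this run:

    # write an override drop-in that clears and then replaces the inherited ExecStart
    sudo mkdir -p /etc/systemd/system/docker.service.d
    printf '%s\n' \
      '[Service]' \
      '# clear the ExecStart inherited from the base unit' \
      'ExecStart=' \
      '# replacement command (flags illustrative)' \
      'ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock' \
    | sudo tee /etc/systemd/system/docker.service.d/10-override.conf >/dev/null
    # pick up the new drop-in
    sudo systemctl daemon-reload && sudo systemctl restart docker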
	I0528 15:29:55.793918   14720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0528 15:29:55.823952   14720 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0528 15:29:55.871672   14720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0528 15:29:55.905969   14720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0528 15:29:55.940962   14720 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0528 15:29:56.011027   14720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0528 15:29:56.036460   14720 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0528 15:29:56.076643   14720 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
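
The %!s(MISSING) token in the command logged above is a Go fmt artifact in the logged string, not something that ran on the node; judging from the tee output echoed on the next log line, the effective command was presumably equivalent to this sketch:

    # point crictl at the cri-dockerd socket
    sudo mkdir -p /etc && printf '%s\n' 'runtime-endpoint: unix:///var/run/cri-dockerd.sock' | sudo tee /etc/crictl.yaml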
	I0528 15:29:56.089015   14720 ssh_runner.go:195] Run: which cri-dockerd
	I0528 15:29:56.098811   14720 command_runner.go:130] > /usr/bin/cri-dockerd
	I0528 15:29:56.112230   14720 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0528 15:29:56.129836   14720 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0528 15:29:56.173660   14720 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0528 15:29:56.368200   14720 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0528 15:29:56.556789   14720 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0528 15:29:56.557056   14720 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
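
The 130-byte /etc/docker/daemon.json pushed here is not echoed in the log; based on the "cgroupfs" cgroup driver mentioned on the previous line, it presumably contains a Docker exec-opts entry along these lines (contents assumed; the real file may also carry logging or storage-driver settings):

    # write a minimal daemon.json selecting the cgroupfs cgroup driver (assumed contents)
    printf '%s\n' \
      '{' \
      '  "exec-opts": ["native.cgroupdriver=cgroupfs"]' \
      '}' \
    | sudo tee /etc/docker/daemon.json >/dev/null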
	I0528 15:29:56.600565   14720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 15:29:56.785552   14720 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0528 15:30:57.912336   14720 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0528 15:30:57.920389   14720 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0528 15:30:57.920848   14720 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.1351003s)
	I0528 15:30:57.934941   14720 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0528 15:30:57.957826   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 systemd[1]: Starting Docker Application Container Engine...
	I0528 15:30:57.957826   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[655]: time="2024-05-28T22:29:29.500677857Z" level=info msg="Starting up"
	I0528 15:30:57.957826   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[655]: time="2024-05-28T22:29:29.501766571Z" level=info msg="containerd not running, starting managed containerd"
	I0528 15:30:57.957928   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[655]: time="2024-05-28T22:29:29.502898485Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=661
	I0528 15:30:57.957928   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.537329523Z" level=info msg="starting containerd" revision=3a4de459a68952ffb703bbe7f2290861a75b6b67 version=v1.7.17
	I0528 15:30:57.957973   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.563154951Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0528 15:30:57.958068   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.563311253Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0528 15:30:57.958068   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.563406754Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0528 15:30:57.958156   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.563442154Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0528 15:30:57.958181   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.564365066Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0528 15:30:57.958181   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.564494468Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0528 15:30:57.958254   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.564793272Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0528 15:30:57.958254   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.564927573Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0528 15:30:57.958332   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.564949473Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0528 15:30:57.958332   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.564962374Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0528 15:30:57.958332   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.565971886Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0528 15:30:57.958407   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.566833297Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0528 15:30:57.958493   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.570415643Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0528 15:30:57.958493   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.570534644Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0528 15:30:57.958563   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.570756147Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0528 15:30:57.958563   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.570857449Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0528 15:30:57.958633   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.571407656Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0528 15:30:57.958633   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.571547257Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0528 15:30:57.958633   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.571597158Z" level=info msg="metadata content store policy set" policy=shared
	I0528 15:30:57.958708   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.581166780Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0528 15:30:57.958708   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.581309981Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0528 15:30:57.958708   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.581335582Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0528 15:30:57.958778   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.581351182Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0528 15:30:57.958778   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.581365282Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0528 15:30:57.958778   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.581434283Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0528 15:30:57.958847   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.581870988Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0528 15:30:57.958847   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.582042691Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0528 15:30:57.958847   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.582159792Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0528 15:30:57.958918   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.582181092Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0528 15:30:57.958918   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.582197993Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0528 15:30:57.958918   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.582212693Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0528 15:30:57.958987   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.582227093Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0528 15:30:57.958987   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.582242193Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0528 15:30:57.959055   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.582258493Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0528 15:30:57.959055   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.582273294Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0528 15:30:57.959131   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.582286894Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0528 15:30:57.959131   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.582301494Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0528 15:30:57.959183   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.582345495Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0528 15:30:57.959230   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.582360795Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0528 15:30:57.959252   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.582383395Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0528 15:30:57.959252   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.582402195Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0528 15:30:57.959312   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.582416195Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0528 15:30:57.959335   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.582429996Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0528 15:30:57.959363   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.582442996Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0528 15:30:57.959363   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.582456096Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0528 15:30:57.959363   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.582504297Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0528 15:30:57.959363   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.582520897Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0528 15:30:57.959363   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.582538397Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0528 15:30:57.959363   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.582560197Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0528 15:30:57.959363   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.582608898Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0528 15:30:57.959363   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.582626498Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0528 15:30:57.959363   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.582646498Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0528 15:30:57.959363   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.582659398Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0528 15:30:57.959363   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.582704899Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0528 15:30:57.959363   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.583162805Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0528 15:30:57.959363   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.583366707Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0528 15:30:57.959363   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.583523709Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0528 15:30:57.959363   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.583669411Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0528 15:30:57.959363   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.583811613Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0528 15:30:57.959363   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.583964415Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0528 15:30:57.959363   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.584148417Z" level=info msg="NRI interface is disabled by configuration."
	I0528 15:30:57.959363   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.586815751Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0528 15:30:57.959363   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.586940653Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0528 15:30:57.959363   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.587201956Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0528 15:30:57.959363   14720 command_runner.go:130] > May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.587242357Z" level=info msg="containerd successfully booted in 0.053418s"
	I0528 15:30:57.959363   14720 command_runner.go:130] > May 28 22:29:30 multinode-517300-m02 dockerd[655]: time="2024-05-28T22:29:30.554747831Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0528 15:30:57.959363   14720 command_runner.go:130] > May 28 22:29:30 multinode-517300-m02 dockerd[655]: time="2024-05-28T22:29:30.640291350Z" level=info msg="Loading containers: start."
	I0528 15:30:57.959363   14720 command_runner.go:130] > May 28 22:29:30 multinode-517300-m02 dockerd[655]: time="2024-05-28T22:29:30.974269271Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0528 15:30:57.959937   14720 command_runner.go:130] > May 28 22:29:31 multinode-517300-m02 dockerd[655]: time="2024-05-28T22:29:31.063613046Z" level=info msg="Loading containers: done."
	I0528 15:30:57.959937   14720 command_runner.go:130] > May 28 22:29:31 multinode-517300-m02 dockerd[655]: time="2024-05-28T22:29:31.102681859Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	I0528 15:30:57.959937   14720 command_runner.go:130] > May 28 22:29:31 multinode-517300-m02 dockerd[655]: time="2024-05-28T22:29:31.103541310Z" level=info msg="Daemon has completed initialization"
	I0528 15:30:57.959937   14720 command_runner.go:130] > May 28 22:29:31 multinode-517300-m02 systemd[1]: Started Docker Application Container Engine.
	I0528 15:30:57.959937   14720 command_runner.go:130] > May 28 22:29:31 multinode-517300-m02 dockerd[655]: time="2024-05-28T22:29:31.209689346Z" level=info msg="API listen on /var/run/docker.sock"
	I0528 15:30:57.959937   14720 command_runner.go:130] > May 28 22:29:31 multinode-517300-m02 dockerd[655]: time="2024-05-28T22:29:31.209883867Z" level=info msg="API listen on [::]:2376"
	I0528 15:30:57.959937   14720 command_runner.go:130] > May 28 22:29:56 multinode-517300-m02 systemd[1]: Stopping Docker Application Container Engine...
	I0528 15:30:57.959937   14720 command_runner.go:130] > May 28 22:29:56 multinode-517300-m02 dockerd[655]: time="2024-05-28T22:29:56.813831886Z" level=info msg="Processing signal 'terminated'"
	I0528 15:30:57.960073   14720 command_runner.go:130] > May 28 22:29:56 multinode-517300-m02 dockerd[655]: time="2024-05-28T22:29:56.816073902Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0528 15:30:57.960073   14720 command_runner.go:130] > May 28 22:29:56 multinode-517300-m02 dockerd[655]: time="2024-05-28T22:29:56.816890571Z" level=info msg="Daemon shutdown complete"
	I0528 15:30:57.960073   14720 command_runner.go:130] > May 28 22:29:56 multinode-517300-m02 dockerd[655]: time="2024-05-28T22:29:56.817202059Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0528 15:30:57.960073   14720 command_runner.go:130] > May 28 22:29:56 multinode-517300-m02 dockerd[655]: time="2024-05-28T22:29:56.817269957Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0528 15:30:57.960195   14720 command_runner.go:130] > May 28 22:29:57 multinode-517300-m02 systemd[1]: docker.service: Deactivated successfully.
	I0528 15:30:57.960195   14720 command_runner.go:130] > May 28 22:29:57 multinode-517300-m02 systemd[1]: Stopped Docker Application Container Engine.
	I0528 15:30:57.960259   14720 command_runner.go:130] > May 28 22:29:57 multinode-517300-m02 systemd[1]: Starting Docker Application Container Engine...
	I0528 15:30:57.960259   14720 command_runner.go:130] > May 28 22:29:57 multinode-517300-m02 dockerd[1037]: time="2024-05-28T22:29:57.888943493Z" level=info msg="Starting up"
	I0528 15:30:57.960314   14720 command_runner.go:130] > May 28 22:30:57 multinode-517300-m02 dockerd[1037]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0528 15:30:57.960362   14720 command_runner.go:130] > May 28 22:30:57 multinode-517300-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0528 15:30:57.960362   14720 command_runner.go:130] > May 28 22:30:57 multinode-517300-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0528 15:30:57.960362   14720 command_runner.go:130] > May 28 22:30:57 multinode-517300-m02 systemd[1]: Failed to start Docker Application Container Engine.
	I0528 15:30:57.968429   14720 out.go:177] 
	W0528 15:30:57.969898   14720 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	May 28 22:29:29 multinode-517300-m02 systemd[1]: Starting Docker Application Container Engine...
	May 28 22:29:29 multinode-517300-m02 dockerd[655]: time="2024-05-28T22:29:29.500677857Z" level=info msg="Starting up"
	May 28 22:29:29 multinode-517300-m02 dockerd[655]: time="2024-05-28T22:29:29.501766571Z" level=info msg="containerd not running, starting managed containerd"
	May 28 22:29:29 multinode-517300-m02 dockerd[655]: time="2024-05-28T22:29:29.502898485Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=661
	May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.537329523Z" level=info msg="starting containerd" revision=3a4de459a68952ffb703bbe7f2290861a75b6b67 version=v1.7.17
	May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.563154951Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.563311253Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.563406754Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.563442154Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.564365066Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.564494468Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.564793272Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.564927573Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.564949473Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.564962374Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.565971886Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.566833297Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.570415643Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.570534644Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.570756147Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.570857449Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.571407656Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.571547257Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.571597158Z" level=info msg="metadata content store policy set" policy=shared
	May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.581166780Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.581309981Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.581335582Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.581351182Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.581365282Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.581434283Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.581870988Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.582042691Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.582159792Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.582181092Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.582197993Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.582212693Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.582227093Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.582242193Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.582258493Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.582273294Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.582286894Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.582301494Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.582345495Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.582360795Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.582383395Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.582402195Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.582416195Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.582429996Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.582442996Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.582456096Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.582504297Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.582520897Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.582538397Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.582560197Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.582608898Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.582626498Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.582646498Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.582659398Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.582704899Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.583162805Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.583366707Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.583523709Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.583669411Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.583811613Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.583964415Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.584148417Z" level=info msg="NRI interface is disabled by configuration."
	May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.586815751Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.586940653Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.587201956Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	May 28 22:29:29 multinode-517300-m02 dockerd[661]: time="2024-05-28T22:29:29.587242357Z" level=info msg="containerd successfully booted in 0.053418s"
	May 28 22:29:30 multinode-517300-m02 dockerd[655]: time="2024-05-28T22:29:30.554747831Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	May 28 22:29:30 multinode-517300-m02 dockerd[655]: time="2024-05-28T22:29:30.640291350Z" level=info msg="Loading containers: start."
	May 28 22:29:30 multinode-517300-m02 dockerd[655]: time="2024-05-28T22:29:30.974269271Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	May 28 22:29:31 multinode-517300-m02 dockerd[655]: time="2024-05-28T22:29:31.063613046Z" level=info msg="Loading containers: done."
	May 28 22:29:31 multinode-517300-m02 dockerd[655]: time="2024-05-28T22:29:31.102681859Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	May 28 22:29:31 multinode-517300-m02 dockerd[655]: time="2024-05-28T22:29:31.103541310Z" level=info msg="Daemon has completed initialization"
	May 28 22:29:31 multinode-517300-m02 systemd[1]: Started Docker Application Container Engine.
	May 28 22:29:31 multinode-517300-m02 dockerd[655]: time="2024-05-28T22:29:31.209689346Z" level=info msg="API listen on /var/run/docker.sock"
	May 28 22:29:31 multinode-517300-m02 dockerd[655]: time="2024-05-28T22:29:31.209883867Z" level=info msg="API listen on [::]:2376"
	May 28 22:29:56 multinode-517300-m02 systemd[1]: Stopping Docker Application Container Engine...
	May 28 22:29:56 multinode-517300-m02 dockerd[655]: time="2024-05-28T22:29:56.813831886Z" level=info msg="Processing signal 'terminated'"
	May 28 22:29:56 multinode-517300-m02 dockerd[655]: time="2024-05-28T22:29:56.816073902Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	May 28 22:29:56 multinode-517300-m02 dockerd[655]: time="2024-05-28T22:29:56.816890571Z" level=info msg="Daemon shutdown complete"
	May 28 22:29:56 multinode-517300-m02 dockerd[655]: time="2024-05-28T22:29:56.817202059Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	May 28 22:29:56 multinode-517300-m02 dockerd[655]: time="2024-05-28T22:29:56.817269957Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	May 28 22:29:57 multinode-517300-m02 systemd[1]: docker.service: Deactivated successfully.
	May 28 22:29:57 multinode-517300-m02 systemd[1]: Stopped Docker Application Container Engine.
	May 28 22:29:57 multinode-517300-m02 systemd[1]: Starting Docker Application Container Engine...
	May 28 22:29:57 multinode-517300-m02 dockerd[1037]: time="2024-05-28T22:29:57.888943493Z" level=info msg="Starting up"
	May 28 22:30:57 multinode-517300-m02 dockerd[1037]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	May 28 22:30:57 multinode-517300-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	May 28 22:30:57 multinode-517300-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	May 28 22:30:57 multinode-517300-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0528 15:30:57.971564   14720 out.go:239] * 
	W0528 15:30:57.973056   14720 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0528 15:30:57.974164   14720 out.go:177] 
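
The failure itself is visible in the embedded journalctl output: after minikube restarted containerd and then docker on multinode-517300-m02, the new dockerd (pid 1037) could not dial /run/containerd/containerd.sock and gave up with "context deadline exceeded" in the roughly 60 seconds between 22:29:57 and 22:30:57. Commands like the following, run on the affected node (e.g. via minikube ssh), are the usual way to narrow this down further; they mirror the journalctl call the tooling already made and are offered only as a hedged troubleshooting sketch:

    # state of both units involved in the restart sequence
    sudo systemctl status containerd docker
    # containerd-side logs around the failed docker start
    sudo journalctl --no-pager -u containerd --since "2024-05-28 22:29:00"
    # is the socket dockerd tried to dial actually present?
    ls -l /run/containerd/containerd.sock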
	
	
	==> Docker <==
	May 28 22:28:03 multinode-517300 dockerd[1065]: time="2024-05-28T22:28:03.555876847Z" level=info msg="shim disconnected" id=6778d40c66ab6312fd1b5932e51db7ab8ad42c8446d82880be172288691db2c7 namespace=moby
	May 28 22:28:03 multinode-517300 dockerd[1065]: time="2024-05-28T22:28:03.556551324Z" level=warning msg="cleaning up after shim disconnected" id=6778d40c66ab6312fd1b5932e51db7ab8ad42c8446d82880be172288691db2c7 namespace=moby
	May 28 22:28:03 multinode-517300 dockerd[1065]: time="2024-05-28T22:28:03.556587218Z" level=info msg="cleaning up dead shim" namespace=moby
	May 28 22:28:04 multinode-517300 dockerd[1065]: time="2024-05-28T22:28:04.017365292Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 28 22:28:04 multinode-517300 dockerd[1065]: time="2024-05-28T22:28:04.017968389Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 28 22:28:04 multinode-517300 dockerd[1065]: time="2024-05-28T22:28:04.018262239Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 28 22:28:04 multinode-517300 dockerd[1065]: time="2024-05-28T22:28:04.023486648Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 28 22:28:04 multinode-517300 cri-dockerd[1283]: time="2024-05-28T22:28:04Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a59db2ea5273a5cd0337a29d13c18803852808b97438b28e86b3a2baa274445f/resolv.conf as [nameserver 172.18.208.1]"
	May 28 22:28:04 multinode-517300 dockerd[1065]: time="2024-05-28T22:28:04.310012672Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 28 22:28:04 multinode-517300 dockerd[1065]: time="2024-05-28T22:28:04.310286525Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 28 22:28:04 multinode-517300 dockerd[1065]: time="2024-05-28T22:28:04.310317120Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 28 22:28:04 multinode-517300 dockerd[1065]: time="2024-05-28T22:28:04.310728450Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 28 22:28:04 multinode-517300 dockerd[1065]: time="2024-05-28T22:28:04.496914320Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 28 22:28:04 multinode-517300 dockerd[1065]: time="2024-05-28T22:28:04.498468551Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 28 22:28:04 multinode-517300 dockerd[1065]: time="2024-05-28T22:28:04.499918800Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 28 22:28:04 multinode-517300 dockerd[1065]: time="2024-05-28T22:28:04.500633176Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 28 22:28:04 multinode-517300 cri-dockerd[1283]: time="2024-05-28T22:28:04Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/0df118d912182e8cd553cecd5daa6aedc671f14dfd73df9071043dab6b686f37/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	May 28 22:28:04 multinode-517300 dockerd[1065]: time="2024-05-28T22:28:04.815225949Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 28 22:28:04 multinode-517300 dockerd[1065]: time="2024-05-28T22:28:04.815402619Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 28 22:28:04 multinode-517300 dockerd[1065]: time="2024-05-28T22:28:04.815418816Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 28 22:28:04 multinode-517300 dockerd[1065]: time="2024-05-28T22:28:04.816365052Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 28 22:28:19 multinode-517300 dockerd[1065]: time="2024-05-28T22:28:19.199067852Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 28 22:28:19 multinode-517300 dockerd[1065]: time="2024-05-28T22:28:19.199446442Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 28 22:28:19 multinode-517300 dockerd[1065]: time="2024-05-28T22:28:19.199462341Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 28 22:28:19 multinode-517300 dockerd[1065]: time="2024-05-28T22:28:19.199551039Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c4bcd6714dce6       6e38f40d628db                                                                                         2 minutes ago       Running             storage-provisioner       2                   975423b0954b6       storage-provisioner
	607f511ba0b4a       8c811b4aec35f                                                                                         3 minutes ago       Running             busybox                   1                   0df118d912182       busybox-fc5497c4f-r5pc4
	cb88e20f8cee8       cbb01a7bd410d                                                                                         3 minutes ago       Running             coredns                   1                   a59db2ea5273a       coredns-7db6d8ff4d-p9vb9
	c22c95a992b14       ac1c61439df46                                                                                         3 minutes ago       Running             kindnet-cni               1                   77ecbfb9d069c       kindnet-rmwrj
	6778d40c66ab6       6e38f40d628db                                                                                         3 minutes ago       Exited              storage-provisioner       1                   975423b0954b6       storage-provisioner
	b707834f01c0c       747097150317f                                                                                         3 minutes ago       Running             kube-proxy                1                   b6ede4b844bc8       kube-proxy-hbbq2
	ba63faf1af5f8       91be940803172                                                                                         3 minutes ago       Running             kube-apiserver            0                   1f23331070bad       kube-apiserver-multinode-517300
	c530758262325       3861cfcd7c04c                                                                                         3 minutes ago       Running             etcd                      0                   75743ac8c58da       etcd-multinode-517300
	22082944cbf47       25a1387cdab82                                                                                         3 minutes ago       Running             kube-controller-manager   1                   ae12872ff747c       kube-controller-manager-multinode-517300
	53ddf401de9e5       a52dc94f0a912                                                                                         3 minutes ago       Running             kube-scheduler            1                   0486b413a08c2       kube-scheduler-multinode-517300
	bf13b101a11ff       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   23 minutes ago      Exited              busybox                   0                   120fc411395d2       busybox-fc5497c4f-r5pc4
	46e6bf352b791       cbb01a7bd410d                                                                                         26 minutes ago      Exited              coredns                   0                   666496aef7c50       coredns-7db6d8ff4d-p9vb9
	3799d49aa0d84       kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8              26 minutes ago      Exited              kindnet-cni               0                   3f800c86d1d40       kindnet-rmwrj
	ec6e090d8a58e       747097150317f                                                                                         26 minutes ago      Exited              kube-proxy                0                   d1b2ab67f8e32       kube-proxy-hbbq2
	1b70e592d21ff       25a1387cdab82                                                                                         27 minutes ago      Exited              kube-controller-manager   0                   453e3ca7e2c8f       kube-controller-manager-multinode-517300
	d79040b216c8f       a52dc94f0a912                                                                                         27 minutes ago      Exited              kube-scheduler            0                   de5c7dce36831       kube-scheduler-multinode-517300
	
	
	==> coredns [46e6bf352b79] <==
	[INFO] 10.244.0.3:47223 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0001462s
	[INFO] 10.244.0.3:50815 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0001192s
	[INFO] 10.244.0.3:46626 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000071101s
	[INFO] 10.244.0.3:42764 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0001483s
	[INFO] 10.244.0.3:51566 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0004042s
	[INFO] 10.244.0.3:58939 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0002092s
	[INFO] 10.244.0.3:41939 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000621s
	[INFO] 10.244.1.2:57605 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000165901s
	[INFO] 10.244.1.2:44917 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0000843s
	[INFO] 10.244.1.2:33167 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000577s
	[INFO] 10.244.1.2:51141 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000829s
	[INFO] 10.244.0.3:58939 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0002064s
	[INFO] 10.244.0.3:58515 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0000665s
	[INFO] 10.244.0.3:59330 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000432901s
	[INFO] 10.244.0.3:60269 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000599s
	[INFO] 10.244.1.2:33516 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001233s
	[INFO] 10.244.1.2:42500 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000374701s
	[INFO] 10.244.1.2:51170 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0001075s
	[INFO] 10.244.1.2:42605 - 5 "PTR IN 1.208.18.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0000851s
	[INFO] 10.244.0.3:40521 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001483s
	[INFO] 10.244.0.3:45749 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000925501s
	[INFO] 10.244.0.3:42750 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000223301s
	[INFO] 10.244.0.3:53798 - 5 "PTR IN 1.208.18.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000082901s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [cb88e20f8cee] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 84fd659ad9ac94c01df8e848594b3a1ced14aceda7cd3c6e3545e3a962a1b1458ec39f5e77d24a8e8c54353ecf39f49081a6595b1a2779ec288b57d4bd17c052
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:44103 - 19549 "HINFO IN 5454845007588812104.5509343929849303850. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.054064527s
	
	
	==> describe nodes <==
	Name:               multinode-517300
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-517300
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c95b4fdda455689199e2a93674568b261e34dc82
	                    minikube.k8s.io/name=multinode-517300
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_28T15_04_19_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 28 May 2024 22:04:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-517300
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 28 May 2024 22:31:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 28 May 2024 22:28:01 +0000   Tue, 28 May 2024 22:04:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 28 May 2024 22:28:01 +0000   Tue, 28 May 2024 22:04:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 28 May 2024 22:28:01 +0000   Tue, 28 May 2024 22:04:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 28 May 2024 22:28:01 +0000   Tue, 28 May 2024 22:28:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.18.212.160
	  Hostname:    multinode-517300
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 3cd4de2fa72b4b3c91b312f05b522b78
	  System UUID:                73fc560d-17a3-fb45-ad7a-d3ae5b4eb5c5
	  Boot ID:                    c4f39419-90ab-48d5-8a53-a6dd097819e2
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-r5pc4                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 coredns-7db6d8ff4d-p9vb9                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     26m
	  kube-system                 etcd-multinode-517300                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         3m48s
	  kube-system                 kindnet-rmwrj                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      26m
	  kube-system                 kube-apiserver-multinode-517300             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m48s
	  kube-system                 kube-controller-manager-multinode-517300    200m (10%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-proxy-hbbq2                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 kube-scheduler-multinode-517300             100m (5%)     0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 26m                    kube-proxy       
	  Normal  Starting                 3m45s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  27m (x8 over 27m)      kubelet          Node multinode-517300 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    27m (x8 over 27m)      kubelet          Node multinode-517300 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     27m (x7 over 27m)      kubelet          Node multinode-517300 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  27m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    27m                    kubelet          Node multinode-517300 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  27m                    kubelet          Node multinode-517300 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     27m                    kubelet          Node multinode-517300 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  27m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 27m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           26m                    node-controller  Node multinode-517300 event: Registered Node multinode-517300 in Controller
	  Normal  NodeReady                26m                    kubelet          Node multinode-517300 status is now: NodeReady
	  Normal  Starting                 3m54s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m53s (x8 over 3m53s)  kubelet          Node multinode-517300 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m53s (x8 over 3m53s)  kubelet          Node multinode-517300 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m53s (x7 over 3m53s)  kubelet          Node multinode-517300 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m53s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m36s                  node-controller  Node multinode-517300 event: Registered Node multinode-517300 in Controller
	
	
	Name:               multinode-517300-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-517300-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c95b4fdda455689199e2a93674568b261e34dc82
	                    minikube.k8s.io/name=multinode-517300
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_28T15_07_27_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 28 May 2024 22:07:27 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-517300-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 28 May 2024 22:24:28 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Tue, 28 May 2024 22:23:45 +0000   Tue, 28 May 2024 22:28:23 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Tue, 28 May 2024 22:23:45 +0000   Tue, 28 May 2024 22:28:23 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Tue, 28 May 2024 22:23:45 +0000   Tue, 28 May 2024 22:28:23 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Tue, 28 May 2024 22:23:45 +0000   Tue, 28 May 2024 22:28:23 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  172.18.223.89
	  Hostname:    multinode-517300-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 7131ddbbf859438a85bb64ac53c68e52
	  System UUID:                1e55cbce-9555-f041-9232-dd950f326fbc
	  Boot ID:                    f4f14f60-2d99-4b08-a756-91943b25705e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-7bglt    0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kindnet-jf49h              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      23m
	  kube-system                 kube-proxy-ckbkv           0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 23m                kube-proxy       
	  Normal  NodeHasSufficientMemory  23m (x2 over 23m)  kubelet          Node multinode-517300-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23m (x2 over 23m)  kubelet          Node multinode-517300-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23m (x2 over 23m)  kubelet          Node multinode-517300-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           23m                node-controller  Node multinode-517300-m02 event: Registered Node multinode-517300-m02 in Controller
	  Normal  NodeReady                23m                kubelet          Node multinode-517300-m02 status is now: NodeReady
	  Normal  RegisteredNode           3m36s              node-controller  Node multinode-517300-m02 event: Registered Node multinode-517300-m02 in Controller
	  Normal  NodeNotReady             2m56s              node-controller  Node multinode-517300-m02 status is now: NodeNotReady
	
	
	Name:               multinode-517300-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-517300-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c95b4fdda455689199e2a93674568b261e34dc82
	                    minikube.k8s.io/name=multinode-517300
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_28T15_23_04_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 28 May 2024 22:23:03 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-517300-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 28 May 2024 22:24:04 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Tue, 28 May 2024 22:23:11 +0000   Tue, 28 May 2024 22:24:46 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Tue, 28 May 2024 22:23:11 +0000   Tue, 28 May 2024 22:24:46 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Tue, 28 May 2024 22:23:11 +0000   Tue, 28 May 2024 22:24:46 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Tue, 28 May 2024 22:23:11 +0000   Tue, 28 May 2024 22:24:46 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  172.18.218.70
	  Hostname:    multinode-517300-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 a2cd998c12ad4ce88a19832c00a52c8e
	  System UUID:                ef05a2d3-2fe2-0549-af40-04bab83e2bb6
	  Boot ID:                    c8e09a18-8a95-4449-9418-8933d24f0d5e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-smtdx       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      19m
	  kube-system                 kube-proxy-jfckx    0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 18m                    kube-proxy       
	  Normal  Starting                 8m12s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  19m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  19m (x2 over 19m)      kubelet          Node multinode-517300-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m (x2 over 19m)      kubelet          Node multinode-517300-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m (x2 over 19m)      kubelet          Node multinode-517300-m03 status is now: NodeHasSufficientPID
	  Normal  NodeReady                18m                    kubelet          Node multinode-517300-m03 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  8m16s (x2 over 8m16s)  kubelet          Node multinode-517300-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m16s (x2 over 8m16s)  kubelet          Node multinode-517300-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m16s (x2 over 8m16s)  kubelet          Node multinode-517300-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m16s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           8m13s                  node-controller  Node multinode-517300-m03 event: Registered Node multinode-517300-m03 in Controller
	  Normal  NodeReady                8m8s                   kubelet          Node multinode-517300-m03 status is now: NodeReady
	  Normal  NodeNotReady             6m33s                  node-controller  Node multinode-517300-m03 status is now: NodeNotReady
	  Normal  RegisteredNode           3m36s                  node-controller  Node multinode-517300-m03 event: Registered Node multinode-517300-m03 in Controller
	
	
	==> dmesg <==
	[  +1.245763] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	[  +1.038432] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +6.631065] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000010] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[May28 22:26] systemd-fstab-generator[643]: Ignoring "noauto" option for root device
	[  +0.096637] kauditd_printk_skb: 4 callbacks suppressed
	[  +0.065352] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[May28 22:27] systemd-fstab-generator[984]: Ignoring "noauto" option for root device
	[  +0.110186] kauditd_printk_skb: 73 callbacks suppressed
	[  +0.545855] systemd-fstab-generator[1024]: Ignoring "noauto" option for root device
	[  +0.186994] systemd-fstab-generator[1036]: Ignoring "noauto" option for root device
	[  +0.233772] systemd-fstab-generator[1050]: Ignoring "noauto" option for root device
	[  +2.864995] systemd-fstab-generator[1236]: Ignoring "noauto" option for root device
	[  +0.184443] systemd-fstab-generator[1248]: Ignoring "noauto" option for root device
	[  +0.204934] systemd-fstab-generator[1260]: Ignoring "noauto" option for root device
	[  +0.269124] systemd-fstab-generator[1275]: Ignoring "noauto" option for root device
	[  +0.850874] systemd-fstab-generator[1383]: Ignoring "noauto" option for root device
	[  +0.091690] kauditd_printk_skb: 205 callbacks suppressed
	[  +3.680395] systemd-fstab-generator[1519]: Ignoring "noauto" option for root device
	[  +1.403433] kauditd_printk_skb: 59 callbacks suppressed
	[  +5.641105] kauditd_printk_skb: 15 callbacks suppressed
	[  +3.581956] systemd-fstab-generator[2334]: Ignoring "noauto" option for root device
	[  +7.428748] kauditd_printk_skb: 70 callbacks suppressed
	[May28 22:28] kauditd_printk_skb: 15 callbacks suppressed
	
	
	==> etcd [c53075826232] <==
	{"level":"info","ts":"2024-05-28T22:27:28.011647Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-05-28T22:27:28.011658Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-05-28T22:27:28.011874Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4e802fd86460e324 switched to configuration voters=(5656573738420855588)"}
	{"level":"info","ts":"2024-05-28T22:27:28.012036Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"b71e968195641782","local-member-id":"4e802fd86460e324","added-peer-id":"4e802fd86460e324","added-peer-peer-urls":["https://172.18.214.123:2380"]}
	{"level":"info","ts":"2024-05-28T22:27:28.012137Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"b71e968195641782","local-member-id":"4e802fd86460e324","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-28T22:27:28.013249Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-28T22:27:28.018776Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-05-28T22:27:28.019695Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"4e802fd86460e324","initial-advertise-peer-urls":["https://172.18.212.160:2380"],"listen-peer-urls":["https://172.18.212.160:2380"],"advertise-client-urls":["https://172.18.212.160:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.18.212.160:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-05-28T22:27:28.021483Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-05-28T22:27:28.021861Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.18.212.160:2380"}
	{"level":"info","ts":"2024-05-28T22:27:28.021943Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.18.212.160:2380"}
	{"level":"info","ts":"2024-05-28T22:27:29.033217Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4e802fd86460e324 is starting a new election at term 2"}
	{"level":"info","ts":"2024-05-28T22:27:29.033284Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4e802fd86460e324 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-05-28T22:27:29.033303Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4e802fd86460e324 received MsgPreVoteResp from 4e802fd86460e324 at term 2"}
	{"level":"info","ts":"2024-05-28T22:27:29.033316Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4e802fd86460e324 became candidate at term 3"}
	{"level":"info","ts":"2024-05-28T22:27:29.033323Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4e802fd86460e324 received MsgVoteResp from 4e802fd86460e324 at term 3"}
	{"level":"info","ts":"2024-05-28T22:27:29.033332Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4e802fd86460e324 became leader at term 3"}
	{"level":"info","ts":"2024-05-28T22:27:29.03334Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 4e802fd86460e324 elected leader 4e802fd86460e324 at term 3"}
	{"level":"info","ts":"2024-05-28T22:27:29.040398Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"4e802fd86460e324","local-member-attributes":"{Name:multinode-517300 ClientURLs:[https://172.18.212.160:2379]}","request-path":"/0/members/4e802fd86460e324/attributes","cluster-id":"b71e968195641782","publish-timeout":"7s"}
	{"level":"info","ts":"2024-05-28T22:27:29.041232Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-28T22:27:29.041475Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-28T22:27:29.045982Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-28T22:27:29.04619Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-05-28T22:27:29.062521Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-05-28T22:27:29.079722Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.18.212.160:2379"}
	
	
	==> kernel <==
	 22:31:19 up 5 min,  0 users,  load average: 0.19, 0.17, 0.08
	Linux multinode-517300 5.10.207 #1 SMP Wed May 22 22:17:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [3799d49aa0d8] <==
	I0528 22:24:22.228316       1 main.go:250] Node multinode-517300-m03 has CIDR [10.244.3.0/24] 
	I0528 22:24:32.239787       1 main.go:223] Handling node with IPs: map[172.18.214.123:{}]
	I0528 22:24:32.239836       1 main.go:227] handling current node
	I0528 22:24:32.239851       1 main.go:223] Handling node with IPs: map[172.18.223.89:{}]
	I0528 22:24:32.239858       1 main.go:250] Node multinode-517300-m02 has CIDR [10.244.1.0/24] 
	I0528 22:24:32.241784       1 main.go:223] Handling node with IPs: map[172.18.218.70:{}]
	I0528 22:24:32.242297       1 main.go:250] Node multinode-517300-m03 has CIDR [10.244.3.0/24] 
	I0528 22:24:42.248545       1 main.go:223] Handling node with IPs: map[172.18.214.123:{}]
	I0528 22:24:42.248640       1 main.go:227] handling current node
	I0528 22:24:42.248683       1 main.go:223] Handling node with IPs: map[172.18.223.89:{}]
	I0528 22:24:42.248695       1 main.go:250] Node multinode-517300-m02 has CIDR [10.244.1.0/24] 
	I0528 22:24:42.248829       1 main.go:223] Handling node with IPs: map[172.18.218.70:{}]
	I0528 22:24:42.248841       1 main.go:250] Node multinode-517300-m03 has CIDR [10.244.3.0/24] 
	I0528 22:24:52.375637       1 main.go:223] Handling node with IPs: map[172.18.214.123:{}]
	I0528 22:24:52.375764       1 main.go:227] handling current node
	I0528 22:24:52.375780       1 main.go:223] Handling node with IPs: map[172.18.223.89:{}]
	I0528 22:24:52.375787       1 main.go:250] Node multinode-517300-m02 has CIDR [10.244.1.0/24] 
	I0528 22:24:52.376346       1 main.go:223] Handling node with IPs: map[172.18.218.70:{}]
	I0528 22:24:52.376366       1 main.go:250] Node multinode-517300-m03 has CIDR [10.244.3.0/24] 
	I0528 22:25:02.427761       1 main.go:223] Handling node with IPs: map[172.18.214.123:{}]
	I0528 22:25:02.427859       1 main.go:227] handling current node
	I0528 22:25:02.427873       1 main.go:223] Handling node with IPs: map[172.18.223.89:{}]
	I0528 22:25:02.427880       1 main.go:250] Node multinode-517300-m02 has CIDR [10.244.1.0/24] 
	I0528 22:25:02.428017       1 main.go:223] Handling node with IPs: map[172.18.218.70:{}]
	I0528 22:25:02.428029       1 main.go:250] Node multinode-517300-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [c22c95a992b1] <==
	I0528 22:30:29.808918       1 main.go:250] Node multinode-517300-m03 has CIDR [10.244.3.0/24] 
	I0528 22:30:39.815714       1 main.go:223] Handling node with IPs: map[172.18.212.160:{}]
	I0528 22:30:39.815861       1 main.go:227] handling current node
	I0528 22:30:39.815876       1 main.go:223] Handling node with IPs: map[172.18.223.89:{}]
	I0528 22:30:39.815883       1 main.go:250] Node multinode-517300-m02 has CIDR [10.244.1.0/24] 
	I0528 22:30:39.816015       1 main.go:223] Handling node with IPs: map[172.18.218.70:{}]
	I0528 22:30:39.816045       1 main.go:250] Node multinode-517300-m03 has CIDR [10.244.3.0/24] 
	I0528 22:30:49.837931       1 main.go:223] Handling node with IPs: map[172.18.212.160:{}]
	I0528 22:30:49.838043       1 main.go:227] handling current node
	I0528 22:30:49.838057       1 main.go:223] Handling node with IPs: map[172.18.223.89:{}]
	I0528 22:30:49.838064       1 main.go:250] Node multinode-517300-m02 has CIDR [10.244.1.0/24] 
	I0528 22:30:49.838545       1 main.go:223] Handling node with IPs: map[172.18.218.70:{}]
	I0528 22:30:49.838576       1 main.go:250] Node multinode-517300-m03 has CIDR [10.244.3.0/24] 
	I0528 22:30:59.854060       1 main.go:223] Handling node with IPs: map[172.18.212.160:{}]
	I0528 22:30:59.854196       1 main.go:227] handling current node
	I0528 22:30:59.854214       1 main.go:223] Handling node with IPs: map[172.18.223.89:{}]
	I0528 22:30:59.854223       1 main.go:250] Node multinode-517300-m02 has CIDR [10.244.1.0/24] 
	I0528 22:30:59.854345       1 main.go:223] Handling node with IPs: map[172.18.218.70:{}]
	I0528 22:30:59.854377       1 main.go:250] Node multinode-517300-m03 has CIDR [10.244.3.0/24] 
	I0528 22:31:09.862475       1 main.go:223] Handling node with IPs: map[172.18.212.160:{}]
	I0528 22:31:09.862518       1 main.go:227] handling current node
	I0528 22:31:09.862531       1 main.go:223] Handling node with IPs: map[172.18.223.89:{}]
	I0528 22:31:09.862537       1 main.go:250] Node multinode-517300-m02 has CIDR [10.244.1.0/24] 
	I0528 22:31:09.862919       1 main.go:223] Handling node with IPs: map[172.18.218.70:{}]
	I0528 22:31:09.863014       1 main.go:250] Node multinode-517300-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [ba63faf1af5f] <==
	I0528 22:27:31.117595       1 aggregator.go:165] initial CRD sync complete...
	I0528 22:27:31.117622       1 autoregister_controller.go:141] Starting autoregister controller
	I0528 22:27:31.117640       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0528 22:27:31.117646       1 cache.go:39] Caches are synced for autoregister controller
	I0528 22:27:31.163237       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0528 22:27:31.164384       1 shared_informer.go:320] Caches are synced for configmaps
	I0528 22:27:31.164654       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0528 22:27:31.165241       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0528 22:27:31.165819       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0528 22:27:31.166553       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0528 22:27:31.168780       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0528 22:27:31.175288       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0528 22:27:31.197337       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0528 22:27:31.197994       1 policy_source.go:224] refreshing policies
	I0528 22:27:31.210987       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0528 22:27:32.000708       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0528 22:27:32.594948       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.18.212.160 172.18.214.123]
	I0528 22:27:32.596423       1 controller.go:615] quota admission added evaluator for: endpoints
	I0528 22:27:32.614183       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0528 22:27:34.075388       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0528 22:27:34.274752       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0528 22:27:34.304033       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0528 22:27:34.446835       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0528 22:27:34.459291       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0528 22:27:52.594233       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.18.212.160]
	
	
	==> kube-controller-manager [1b70e592d21f] <==
	I0528 22:04:44.668202       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="200.5µs"
	I0528 22:04:46.195642       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0528 22:07:27.213938       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-517300-m02\" does not exist"
	I0528 22:07:27.226042       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-517300-m02" podCIDRs=["10.244.1.0/24"]
	I0528 22:07:31.228345       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-517300-m02"
	I0528 22:07:51.174806       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-517300-m02"
	I0528 22:08:16.713461       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="89.59304ms"
	I0528 22:08:16.752067       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="37.939402ms"
	I0528 22:08:16.752155       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="33.5µs"
	I0528 22:08:19.527410       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="16.489518ms"
	I0528 22:08:19.527571       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="97.1µs"
	I0528 22:08:19.942951       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.880513ms"
	I0528 22:08:19.943273       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41µs"
	I0528 22:12:11.003115       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-517300-m02"
	I0528 22:12:11.004146       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-517300-m03\" does not exist"
	I0528 22:12:11.090742       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-517300-m03" podCIDRs=["10.244.2.0/24"]
	I0528 22:12:11.303133       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-517300-m03"
	I0528 22:12:30.087850       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-517300-m02"
	I0528 22:20:21.437047       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-517300-m02"
	I0528 22:22:57.424812       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-517300-m02"
	I0528 22:23:03.817487       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-517300-m03\" does not exist"
	I0528 22:23:03.819218       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-517300-m02"
	I0528 22:23:03.846433       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-517300-m03" podCIDRs=["10.244.3.0/24"]
	I0528 22:23:11.338458       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-517300-m02"
	I0528 22:24:46.629611       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-517300-m02"
	
	
	==> kube-controller-manager [22082944cbf4] <==
	I0528 22:27:43.706713       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0528 22:27:43.720528       1 shared_informer.go:320] Caches are synced for taint
	I0528 22:27:43.722946       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0528 22:27:43.747894       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.866159ms"
	I0528 22:27:43.749600       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="281.509µs"
	I0528 22:27:43.754871       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-517300"
	I0528 22:27:43.755018       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-517300-m02"
	I0528 22:27:43.755130       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-517300-m03"
	I0528 22:27:43.755904       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0528 22:27:43.767626       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="59.293383ms"
	I0528 22:27:43.768289       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="58.161µs"
	I0528 22:27:43.831513       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0528 22:27:43.870150       1 shared_informer.go:320] Caches are synced for resource quota
	I0528 22:27:43.881092       1 shared_informer.go:320] Caches are synced for resource quota
	I0528 22:27:44.290037       1 shared_informer.go:320] Caches are synced for garbage collector
	I0528 22:27:44.290312       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0528 22:27:44.335506       1 shared_informer.go:320] Caches are synced for garbage collector
	I0528 22:28:01.904489       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-517300-m02"
	I0528 22:28:04.959296       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="53.09µs"
	I0528 22:28:06.033452       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="16.14168ms"
	I0528 22:28:06.033548       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="46.898µs"
	I0528 22:28:06.073971       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="17.776027ms"
	I0528 22:28:06.075996       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="175.194µs"
	I0528 22:28:23.885242       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="21.68798ms"
	I0528 22:28:23.885329       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="35.499µs"
	
	
	==> kube-proxy [b707834f01c0] <==
	I0528 22:27:33.802514       1 server_linux.go:69] "Using iptables proxy"
	I0528 22:27:33.854011       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.18.212.160"]
	I0528 22:27:33.972980       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0528 22:27:33.973141       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0528 22:27:33.973372       1 server_linux.go:165] "Using iptables Proxier"
	I0528 22:27:33.977014       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0528 22:27:33.977410       1 server.go:872] "Version info" version="v1.30.1"
	I0528 22:27:33.977443       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0528 22:27:33.981945       1 config.go:192] "Starting service config controller"
	I0528 22:27:33.982535       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0528 22:27:33.982620       1 config.go:101] "Starting endpoint slice config controller"
	I0528 22:27:33.982635       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0528 22:27:33.985645       1 config.go:319] "Starting node config controller"
	I0528 22:27:33.985679       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0528 22:27:34.083558       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0528 22:27:34.084220       1 shared_informer.go:320] Caches are synced for service config
	I0528 22:27:34.086065       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [ec6e090d8a58] <==
	I0528 22:04:33.388317       1 server_linux.go:69] "Using iptables proxy"
	I0528 22:04:33.409384       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.18.214.123"]
	I0528 22:04:33.519730       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0528 22:04:33.519856       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0528 22:04:33.519880       1 server_linux.go:165] "Using iptables Proxier"
	I0528 22:04:33.524815       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0528 22:04:33.525225       1 server.go:872] "Version info" version="v1.30.1"
	I0528 22:04:33.525314       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0528 22:04:33.527247       1 config.go:192] "Starting service config controller"
	I0528 22:04:33.534238       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0528 22:04:33.528207       1 config.go:319] "Starting node config controller"
	I0528 22:04:33.534645       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0528 22:04:33.534886       1 shared_informer.go:320] Caches are synced for node config
	I0528 22:04:33.527637       1 config.go:101] "Starting endpoint slice config controller"
	I0528 22:04:33.535296       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0528 22:04:33.535487       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0528 22:04:33.535760       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [53ddf401de9e] <==
	I0528 22:27:28.311472       1 serving.go:380] Generated self-signed cert in-memory
	W0528 22:27:31.042151       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0528 22:27:31.042558       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0528 22:27:31.042900       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0528 22:27:31.043112       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0528 22:27:31.115462       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.1"
	I0528 22:27:31.115536       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0528 22:27:31.120904       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0528 22:27:31.121071       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0528 22:27:31.121114       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0528 22:27:31.121231       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0528 22:27:31.221831       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [d79040b216c8] <==
	E0528 22:04:15.906594       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0528 22:04:15.974569       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0528 22:04:15.974920       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0528 22:04:15.978379       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0528 22:04:15.978603       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0528 22:04:15.982465       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0528 22:04:15.982743       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0528 22:04:16.016813       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0528 22:04:16.018026       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0528 22:04:16.098521       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0528 22:04:16.098961       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0528 22:04:16.160174       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0528 22:04:16.160284       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0528 22:04:16.185917       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0528 22:04:16.186824       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0528 22:04:16.300433       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0528 22:04:16.300744       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0528 22:04:16.463944       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0528 22:04:16.464359       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0528 22:04:16.521892       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0528 22:04:16.521995       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0528 22:04:16.544827       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0528 22:04:16.545019       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0528 22:04:17.846264       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0528 22:25:03.647235       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	May 28 22:27:54 multinode-517300 kubelet[1526]: E0528 22:27:54.009670    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-r5pc4" podUID="ea8bd23f-d46d-4856-89b0-b4455180e6b6"
	May 28 22:27:54 multinode-517300 kubelet[1526]: E0528 22:27:54.010852    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-p9vb9" podUID="d1d5d377-99a6-445e-9597-446b3057e80f"
	May 28 22:27:56 multinode-517300 kubelet[1526]: E0528 22:27:56.008329    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-p9vb9" podUID="d1d5d377-99a6-445e-9597-446b3057e80f"
	May 28 22:27:56 multinode-517300 kubelet[1526]: E0528 22:27:56.009315    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-r5pc4" podUID="ea8bd23f-d46d-4856-89b0-b4455180e6b6"
	May 28 22:28:03 multinode-517300 kubelet[1526]: I0528 22:28:03.745959    1526 scope.go:117] "RemoveContainer" containerID="74720800aab45ca450145eb95837e6bb3dafd3be8de003161bc7584490ca0c7c"
	May 28 22:28:03 multinode-517300 kubelet[1526]: I0528 22:28:03.746433    1526 scope.go:117] "RemoveContainer" containerID="6778d40c66ab6312fd1b5932e51db7ab8ad42c8446d82880be172288691db2c7"
	May 28 22:28:03 multinode-517300 kubelet[1526]: E0528 22:28:03.747247    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(ab978e2d-322a-4207-9539-d2d92a25e566)\"" pod="kube-system/storage-provisioner" podUID="ab978e2d-322a-4207-9539-d2d92a25e566"
	May 28 22:28:19 multinode-517300 kubelet[1526]: I0528 22:28:19.008052    1526 scope.go:117] "RemoveContainer" containerID="6778d40c66ab6312fd1b5932e51db7ab8ad42c8446d82880be172288691db2c7"
	May 28 22:28:26 multinode-517300 kubelet[1526]: I0528 22:28:26.026358    1526 scope.go:117] "RemoveContainer" containerID="b44c8d48ef931c2a5b8e7677da64d1beb74a5c6ce96df97208e3c592a0f8da1c"
	May 28 22:28:26 multinode-517300 kubelet[1526]: E0528 22:28:26.043852    1526 iptables.go:577] "Could not set up iptables canary" err=<
	May 28 22:28:26 multinode-517300 kubelet[1526]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 28 22:28:26 multinode-517300 kubelet[1526]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 28 22:28:26 multinode-517300 kubelet[1526]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 28 22:28:26 multinode-517300 kubelet[1526]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 28 22:28:26 multinode-517300 kubelet[1526]: I0528 22:28:26.070915    1526 scope.go:117] "RemoveContainer" containerID="b21e4a41b2feb34fffb10ad7bfe989823ce3d0db5b69a0c0897641900c8f150a"
	May 28 22:29:26 multinode-517300 kubelet[1526]: E0528 22:29:26.039485    1526 iptables.go:577] "Could not set up iptables canary" err=<
	May 28 22:29:26 multinode-517300 kubelet[1526]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 28 22:29:26 multinode-517300 kubelet[1526]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 28 22:29:26 multinode-517300 kubelet[1526]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 28 22:29:26 multinode-517300 kubelet[1526]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 28 22:30:26 multinode-517300 kubelet[1526]: E0528 22:30:26.048505    1526 iptables.go:577] "Could not set up iptables canary" err=<
	May 28 22:30:26 multinode-517300 kubelet[1526]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 28 22:30:26 multinode-517300 kubelet[1526]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 28 22:30:26 multinode-517300 kubelet[1526]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 28 22:30:26 multinode-517300 kubelet[1526]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0528 15:31:11.361622   13056 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-517300 -n multinode-517300
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-517300 -n multinode-517300: (11.7598688s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-517300 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (464.60s)
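Editor's note: in the kubelet log above, the minute-by-minute "Could not set up iptables canary" errors come from the IPv6 side only: ip6tables cannot open the `nat' table ("Table does not exist (do you need to insmod?)"), which usually indicates the ip6table_nat module is not loaded in the guest kernel. A minimal check, assuming the multinode-517300 VM is still running and reachable over SSH (illustrative command, not part of the recorded test run):

	out/minikube-windows-amd64.exe -p multinode-517300 ssh "lsmod | grep ip6table_nat; sudo ip6tables -t nat -L -n"

If the module is absent there, the canary error is likely background noise from the guest image and can probably be set aside when reading the RestartKeepsNodes failure itself.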

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (299.91s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-390500 --driver=hyperv
E0528 15:50:06.453570    1780 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-854100\client.crt: The system cannot find the path specified.
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-390500 --driver=hyperv: exit status 1 (4m59.7396969s)

                                                
                                                
-- stdout --
	* [NoKubernetes-390500] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4412 Build 19045.4412
	  - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18966
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on user configuration
	* Starting "NoKubernetes-390500" primary control-plane node in "NoKubernetes-390500" cluster

                                                
                                                
-- /stdout --
** stderr ** 
	W0528 15:48:45.349785    7092 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p NoKubernetes-390500 --driver=hyperv" : exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-390500 -n NoKubernetes-390500
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-390500 -n NoKubernetes-390500: exit status 7 (168.8129ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	W0528 15:53:45.067308    3124 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-390500" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (299.91s)
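Editor's note: the stderr block above, like the stderr blocks in the other failures in this report, opens with a main.go:291 warning that the Docker CLI context "default" cannot be resolved because its metadata file under C:\Users\jenkins.minikube1\.docker\contexts\meta is missing on the Jenkins host. The warning is emitted while minikube resolves the Docker CLI context, before the start work begins, and it appears identically in failing and long-running tests alike. If the stale context state on the host needs clearing, a hedged starting point is the standard Docker CLI context commands below; whether they silence this particular warning depends on the host's Docker configuration:

	docker context ls
	docker context use default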

                                                
                                    
x
+
TestPause/serial/DeletePaused (10800.421s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe delete -p pause-143500 --alsologtostderr -v=5
pause_test.go:132: (dbg) Non-zero exit: out/minikube-windows-amd64.exe delete -p pause-143500 --alsologtostderr -v=5: exit status 1 (8.3603202s)

                                                
                                                
-- stdout --
	* Stopping node "pause-143500"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	W0528 16:20:44.865703    4220 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0528 16:20:44.874107    4220 out.go:291] Setting OutFile to fd 600 ...
	I0528 16:20:44.875065    4220 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 16:20:44.875065    4220 out.go:304] Setting ErrFile to fd 1664...
	I0528 16:20:44.875207    4220 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 16:20:44.890231    4220 out.go:298] Setting JSON to false
	I0528 16:20:44.900954    4220 cli_runner.go:164] Run: docker ps -a --filter label=name.minikube.sigs.k8s.io --format {{.Names}}
	I0528 16:20:45.104857    4220 config.go:182] Loaded profile config "auto-952300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0528 16:20:45.105384    4220 config.go:182] Loaded profile config "ha-941700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0528 16:20:45.105635    4220 config.go:182] Loaded profile config "kindnet-952300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0528 16:20:45.105635    4220 config.go:182] Loaded profile config "kubernetes-upgrade-709500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0528 16:20:45.106724    4220 config.go:182] Loaded profile config "multinode-517300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0528 16:20:45.107213    4220 config.go:182] Loaded profile config "pause-143500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0528 16:20:45.107717    4220 config.go:182] Loaded profile config "pause-143500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0528 16:20:45.107777    4220 delete.go:301] DeleteProfiles
	I0528 16:20:45.107777    4220 delete.go:329] Deleting pause-143500
	I0528 16:20:45.107777    4220 delete.go:334] pause-143500 configuration: &{Name:pause-143500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:pause-143500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.18.215.196 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0528 16:20:45.108441    4220 config.go:182] Loaded profile config "pause-143500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0528 16:20:45.108581    4220 config.go:182] Loaded profile config "pause-143500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0528 16:20:45.111183    4220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-143500 ).state
	I0528 16:20:47.352038    4220 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 16:20:47.352038    4220 main.go:141] libmachine: [stderr =====>] : 
	I0528 16:20:47.352038    4220 stop.go:39] StopHost: pause-143500
	I0528 16:20:47.357413    4220 out.go:177] * Stopping node "pause-143500"  ...
	I0528 16:20:47.359373    4220 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0528 16:20:47.371186    4220 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0528 16:20:47.371186    4220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-143500 ).state
	I0528 16:20:49.668127    4220 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 16:20:49.669048    4220 main.go:141] libmachine: [stderr =====>] : 
	I0528 16:20:49.669162    4220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-143500 ).networkadapters[0]).ipaddresses[0]
	I0528 16:20:52.307975    4220 main.go:141] libmachine: [stdout =====>] : 172.18.215.196
	
	I0528 16:20:52.308080    4220 main.go:141] libmachine: [stderr =====>] : 
	I0528 16:20:52.308346    4220 sshutil.go:53] new ssh client: &{IP:172.18.215.196 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\pause-143500\id_rsa Username:docker}
	I0528 16:20:52.426891    4220 ssh_runner.go:235] Completed: sudo mkdir -p /var/lib/minikube/backup: (5.0556884s)
	I0528 16:20:52.440008    4220 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0528 16:20:52.527707    4220 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0528 16:20:52.595090    4220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-143500 ).state

                                                
                                                
** /stderr **
pause_test.go:134: failed to delete minikube with args: "out/minikube-windows-amd64.exe delete -p pause-143500 --alsologtostderr -v=5" : exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-143500 -n pause-143500
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-143500 -n pause-143500: exit status 2 (12.5947277s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
** stderr ** 
	W0528 16:20:53.245830    3508 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestPause/serial/DeletePaused FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/DeletePaused]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p pause-143500 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p pause-143500 logs -n 25: (18.7087952s)
helpers_test.go:252: TestPause/serial/DeletePaused logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|---------------------------|-------------------|---------|---------------------|---------------------|
	| Command |              Args              |          Profile          |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|---------------------------|-------------------|---------|---------------------|---------------------|
	| ssh     | force-systemd-env-693100       | force-systemd-env-693100  | minikube1\jenkins | v1.33.1 | 28 May 24 16:01 PDT | 28 May 24 16:01 PDT |
	|         | ssh docker info --format       |                           |                   |         |                     |                     |
	|         | {{.CgroupDriver}}              |                           |                   |         |                     |                     |
	| delete  | -p force-systemd-env-693100    | force-systemd-env-693100  | minikube1\jenkins | v1.33.1 | 28 May 24 16:01 PDT | 28 May 24 16:02 PDT |
	| start   | -p kubernetes-upgrade-709500   | kubernetes-upgrade-709500 | minikube1\jenkins | v1.33.1 | 28 May 24 16:02 PDT | 28 May 24 16:09 PDT |
	|         | --memory=2200                  |                           |                   |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=1         |                           |                   |         |                     |                     |
	|         | --driver=hyperv                |                           |                   |         |                     |                     |
	| ssh     | docker-flags-856200 ssh        | docker-flags-856200       | minikube1\jenkins | v1.33.1 | 28 May 24 16:03 PDT | 28 May 24 16:03 PDT |
	|         | sudo systemctl show docker     |                           |                   |         |                     |                     |
	|         | --property=Environment         |                           |                   |         |                     |                     |
	|         | --no-pager                     |                           |                   |         |                     |                     |
	| ssh     | docker-flags-856200 ssh        | docker-flags-856200       | minikube1\jenkins | v1.33.1 | 28 May 24 16:03 PDT | 28 May 24 16:04 PDT |
	|         | sudo systemctl show docker     |                           |                   |         |                     |                     |
	|         | --property=ExecStart           |                           |                   |         |                     |                     |
	|         | --no-pager                     |                           |                   |         |                     |                     |
	| delete  | -p docker-flags-856200         | docker-flags-856200       | minikube1\jenkins | v1.33.1 | 28 May 24 16:04 PDT | 28 May 24 16:04 PDT |
	| start   | -p stopped-upgrade-347000      | minikube                  | minikube1\jenkins | v1.26.0 | 28 May 24 16:04 PDT | 28 May 24 16:12 PDT |
	|         | --memory=2200                  |                           |                   |         |                     |                     |
	|         | --vm-driver=hyperv             |                           |                   |         |                     |                     |
	| delete  | -p cert-expiration-025200      | cert-expiration-025200    | minikube1\jenkins | v1.33.1 | 28 May 24 16:05 PDT | 28 May 24 16:05 PDT |
	| start   | -p pause-143500 --memory=2048  | pause-143500              | minikube1\jenkins | v1.33.1 | 28 May 24 16:05 PDT | 28 May 24 16:14 PDT |
	|         | --install-addons=false         |                           |                   |         |                     |                     |
	|         | --wait=all --driver=hyperv     |                           |                   |         |                     |                     |
	| start   | -p running-upgrade-205300      | running-upgrade-205300    | minikube1\jenkins | v1.33.1 | 28 May 24 16:07 PDT | 28 May 24 16:16 PDT |
	|         | --memory=2200                  |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=1         |                           |                   |         |                     |                     |
	|         | --driver=hyperv                |                           |                   |         |                     |                     |
	| stop    | -p kubernetes-upgrade-709500   | kubernetes-upgrade-709500 | minikube1\jenkins | v1.33.1 | 28 May 24 16:09 PDT | 28 May 24 16:10 PDT |
	| start   | -p kubernetes-upgrade-709500   | kubernetes-upgrade-709500 | minikube1\jenkins | v1.33.1 | 28 May 24 16:10 PDT | 28 May 24 16:17 PDT |
	|         | --memory=2200                  |                           |                   |         |                     |                     |
	|         | --kubernetes-version=v1.30.1   |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=1         |                           |                   |         |                     |                     |
	|         | --driver=hyperv                |                           |                   |         |                     |                     |
	| stop    | stopped-upgrade-347000 stop    | minikube                  | minikube1\jenkins | v1.26.0 | 28 May 24 16:12 PDT | 28 May 24 16:12 PDT |
	| start   | -p stopped-upgrade-347000      | stopped-upgrade-347000    | minikube1\jenkins | v1.33.1 | 28 May 24 16:12 PDT | 28 May 24 16:19 PDT |
	|         | --memory=2200                  |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=1         |                           |                   |         |                     |                     |
	|         | --driver=hyperv                |                           |                   |         |                     |                     |
	| start   | -p pause-143500                | pause-143500              | minikube1\jenkins | v1.33.1 | 28 May 24 16:14 PDT | 28 May 24 16:20 PDT |
	|         | --alsologtostderr -v=1         |                           |                   |         |                     |                     |
	|         | --driver=hyperv                |                           |                   |         |                     |                     |
	| delete  | -p running-upgrade-205300      | running-upgrade-205300    | minikube1\jenkins | v1.33.1 | 28 May 24 16:16 PDT | 28 May 24 16:17 PDT |
	| start   | -p auto-952300 --memory=3072   | auto-952300               | minikube1\jenkins | v1.33.1 | 28 May 24 16:17 PDT |                     |
	|         | --alsologtostderr --wait=true  |                           |                   |         |                     |                     |
	|         | --wait-timeout=15m             |                           |                   |         |                     |                     |
	|         | --driver=hyperv                |                           |                   |         |                     |                     |
	| start   | -p kubernetes-upgrade-709500   | kubernetes-upgrade-709500 | minikube1\jenkins | v1.33.1 | 28 May 24 16:17 PDT |                     |
	|         | --memory=2200                  |                           |                   |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                           |                   |         |                     |                     |
	|         | --driver=hyperv                |                           |                   |         |                     |                     |
	| start   | -p kubernetes-upgrade-709500   | kubernetes-upgrade-709500 | minikube1\jenkins | v1.33.1 | 28 May 24 16:17 PDT |                     |
	|         | --memory=2200                  |                           |                   |         |                     |                     |
	|         | --kubernetes-version=v1.30.1   |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=1         |                           |                   |         |                     |                     |
	|         | --driver=hyperv                |                           |                   |         |                     |                     |
	| delete  | -p stopped-upgrade-347000      | stopped-upgrade-347000    | minikube1\jenkins | v1.33.1 | 28 May 24 16:19 PDT | 28 May 24 16:20 PDT |
	| pause   | -p pause-143500                | pause-143500              | minikube1\jenkins | v1.33.1 | 28 May 24 16:20 PDT | 28 May 24 16:20 PDT |
	|         | --alsologtostderr -v=5         |                           |                   |         |                     |                     |
	| start   | -p kindnet-952300              | kindnet-952300            | minikube1\jenkins | v1.33.1 | 28 May 24 16:20 PDT |                     |
	|         | --memory=3072                  |                           |                   |         |                     |                     |
	|         | --alsologtostderr --wait=true  |                           |                   |         |                     |                     |
	|         | --wait-timeout=15m             |                           |                   |         |                     |                     |
	|         | --cni=kindnet --driver=hyperv  |                           |                   |         |                     |                     |
	| unpause | -p pause-143500                | pause-143500              | minikube1\jenkins | v1.33.1 | 28 May 24 16:20 PDT | 28 May 24 16:20 PDT |
	|         | --alsologtostderr -v=5         |                           |                   |         |                     |                     |
	| pause   | -p pause-143500                | pause-143500              | minikube1\jenkins | v1.33.1 | 28 May 24 16:20 PDT | 28 May 24 16:20 PDT |
	|         | --alsologtostderr -v=5         |                           |                   |         |                     |                     |
	| delete  | -p pause-143500                | pause-143500              | minikube1\jenkins | v1.33.1 | 28 May 24 16:20 PDT |                     |
	|         | --alsologtostderr -v=5         |                           |                   |         |                     |                     |
	|---------|--------------------------------|---------------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/28 16:20:15
	Running on machine: minikube1
	Binary: Built with gc go1.22.3 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0528 16:20:15.178833    2912 out.go:291] Setting OutFile to fd 1508 ...
	I0528 16:20:15.178833    2912 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 16:20:15.178833    2912 out.go:304] Setting ErrFile to fd 1784...
	I0528 16:20:15.179835    2912 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 16:20:15.217113    2912 out.go:298] Setting JSON to false
	I0528 16:20:15.221101    2912 start.go:129] hostinfo: {"hostname":"minikube1","uptime":11179,"bootTime":1716927235,"procs":214,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4412 Build 19045.4412","kernelVersion":"10.0.19045.4412 Build 19045.4412","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0528 16:20:15.221101    2912 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0528 16:20:15.228104    2912 out.go:177] * [kindnet-952300] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4412 Build 19045.4412
	I0528 16:20:15.232560    2912 notify.go:220] Checking for updates...
	I0528 16:20:15.235124    2912 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0528 16:20:15.238130    2912 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0528 16:20:15.241109    2912 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0528 16:20:15.244135    2912 out.go:177]   - MINIKUBE_LOCATION=18966
	I0528 16:20:15.246123    2912 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0528 16:20:11.008232    6152 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM auto-952300 ).state
	I0528 16:20:13.400313    6152 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 16:20:13.400313    6152 main.go:141] libmachine: [stderr =====>] : 
	I0528 16:20:13.400442    6152 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM auto-952300 ).networkadapters[0]).ipaddresses[0]
	I0528 16:20:15.250137    2912 config.go:182] Loaded profile config "auto-952300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0528 16:20:15.250137    2912 config.go:182] Loaded profile config "ha-941700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0528 16:20:15.251142    2912 config.go:182] Loaded profile config "kubernetes-upgrade-709500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0528 16:20:15.251142    2912 config.go:182] Loaded profile config "multinode-517300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0528 16:20:15.252139    2912 config.go:182] Loaded profile config "pause-143500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0528 16:20:15.252139    2912 driver.go:392] Setting default libvirt URI to qemu:///system
	I0528 16:20:16.292466    6152 main.go:141] libmachine: [stdout =====>] : 
	I0528 16:20:16.292466    6152 main.go:141] libmachine: [stderr =====>] : 
	I0528 16:20:17.298907    6152 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM auto-952300 ).state
	I0528 16:20:20.351456    6152 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 16:20:20.351456    6152 main.go:141] libmachine: [stderr =====>] : 
	I0528 16:20:20.351456    6152 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM auto-952300 ).networkadapters[0]).ipaddresses[0]
	I0528 16:20:21.304111    2912 out.go:177] * Using the hyperv driver based on user configuration
	I0528 16:20:21.310940    2912 start.go:297] selected driver: hyperv
	I0528 16:20:21.310940    2912 start.go:901] validating driver "hyperv" against <nil>
	I0528 16:20:21.310940    2912 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0528 16:20:21.360105    2912 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0528 16:20:21.360599    2912 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0528 16:20:21.361615    2912 cni.go:84] Creating CNI manager for "kindnet"
	I0528 16:20:21.361615    2912 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0528 16:20:21.361842    2912 start.go:340] cluster config:
	{Name:kindnet-952300 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:3072 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:kindnet-952300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0528 16:20:21.361907    2912 iso.go:125] acquiring lock: {Name:mk309f19768252c5551da1c999633aa2617813e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0528 16:20:21.368965    2912 out.go:177] * Starting "kindnet-952300" primary control-plane node in "kindnet-952300" cluster
	I0528 16:20:21.371618    2912 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0528 16:20:21.371618    2912 preload.go:147] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
	I0528 16:20:21.371618    2912 cache.go:56] Caching tarball of preloaded images
	I0528 16:20:21.372304    2912 preload.go:173] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0528 16:20:21.372304    2912 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0528 16:20:21.372304    2912 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\kindnet-952300\config.json ...
	I0528 16:20:21.372304    2912 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\kindnet-952300\config.json: {Name:mk337fdc3ab0fbd90068a58eab92d14554f54064 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 16:20:21.373272    2912 start.go:360] acquireMachinesLock for kindnet-952300: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0528 16:20:23.173728    6152 main.go:141] libmachine: [stdout =====>] : 172.18.208.144
	
	I0528 16:20:23.173728    6152 main.go:141] libmachine: [stderr =====>] : 
	I0528 16:20:23.174315    6152 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM auto-952300 ).state
	I0528 16:20:25.478997    6152 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 16:20:25.478997    6152 main.go:141] libmachine: [stderr =====>] : 
	I0528 16:20:25.479199    6152 machine.go:94] provisionDockerMachine start ...
	I0528 16:20:25.479310    6152 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM auto-952300 ).state
	I0528 16:20:27.718249    6152 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 16:20:27.718249    6152 main.go:141] libmachine: [stderr =====>] : 
	I0528 16:20:27.719217    6152 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM auto-952300 ).networkadapters[0]).ipaddresses[0]
	I0528 16:20:30.396230    6152 main.go:141] libmachine: [stdout =====>] : 172.18.208.144
	
	I0528 16:20:30.396230    6152 main.go:141] libmachine: [stderr =====>] : 
	I0528 16:20:30.401950    6152 main.go:141] libmachine: Using SSH client type: native
	I0528 16:20:30.402702    6152 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5ca4a0] 0x5cd080 <nil>  [] 0s} 172.18.208.144 22 <nil> <nil>}
	I0528 16:20:30.402702    6152 main.go:141] libmachine: About to run SSH command:
	hostname
	I0528 16:20:30.534760    6152 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0528 16:20:30.534760    6152 buildroot.go:166] provisioning hostname "auto-952300"
	I0528 16:20:30.534760    6152 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM auto-952300 ).state
	I0528 16:20:32.823377    6152 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 16:20:32.823377    6152 main.go:141] libmachine: [stderr =====>] : 
	I0528 16:20:32.823696    6152 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM auto-952300 ).networkadapters[0]).ipaddresses[0]
	I0528 16:20:35.516645    6152 main.go:141] libmachine: [stdout =====>] : 172.18.208.144
	
	I0528 16:20:35.516645    6152 main.go:141] libmachine: [stderr =====>] : 
	I0528 16:20:35.523769    6152 main.go:141] libmachine: Using SSH client type: native
	I0528 16:20:35.524439    6152 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5ca4a0] 0x5cd080 <nil>  [] 0s} 172.18.208.144 22 <nil> <nil>}
	I0528 16:20:35.524439    6152 main.go:141] libmachine: About to run SSH command:
	sudo hostname auto-952300 && echo "auto-952300" | sudo tee /etc/hostname
	I0528 16:20:35.693938    6152 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-952300
	
	I0528 16:20:35.694163    6152 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM auto-952300 ).state
	I0528 16:20:38.013881    6152 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 16:20:38.013968    6152 main.go:141] libmachine: [stderr =====>] : 
	I0528 16:20:38.014098    6152 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM auto-952300 ).networkadapters[0]).ipaddresses[0]
	I0528 16:20:40.727687    6152 main.go:141] libmachine: [stdout =====>] : 172.18.208.144
	
	I0528 16:20:40.727687    6152 main.go:141] libmachine: [stderr =====>] : 
	I0528 16:20:40.733927    6152 main.go:141] libmachine: Using SSH client type: native
	I0528 16:20:40.734044    6152 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5ca4a0] 0x5cd080 <nil>  [] 0s} 172.18.208.144 22 <nil> <nil>}
	I0528 16:20:40.734044    6152 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-952300' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-952300/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-952300' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0528 16:20:40.884017    6152 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0528 16:20:40.884017    6152 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0528 16:20:40.884017    6152 buildroot.go:174] setting up certificates
	I0528 16:20:40.884017    6152 provision.go:84] configureAuth start
	I0528 16:20:40.885024    6152 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM auto-952300 ).state
	I0528 16:20:43.150249    6152 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 16:20:43.150502    6152 main.go:141] libmachine: [stderr =====>] : 
	I0528 16:20:43.150502    6152 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM auto-952300 ).networkadapters[0]).ipaddresses[0]
	I0528 16:20:45.927473    6152 main.go:141] libmachine: [stdout =====>] : 172.18.208.144
	
	I0528 16:20:45.927473    6152 main.go:141] libmachine: [stderr =====>] : 
	I0528 16:20:45.927858    6152 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM auto-952300 ).state
	I0528 16:20:48.206307    6152 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 16:20:48.206897    6152 main.go:141] libmachine: [stderr =====>] : 
	I0528 16:20:48.206990    6152 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM auto-952300 ).networkadapters[0]).ipaddresses[0]
	I0528 16:20:50.857721    6152 main.go:141] libmachine: [stdout =====>] : 172.18.208.144
	
	I0528 16:20:50.857721    6152 main.go:141] libmachine: [stderr =====>] : 
	I0528 16:20:50.857721    6152 provision.go:143] copyHostCerts
	I0528 16:20:50.858586    6152 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0528 16:20:50.858586    6152 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0528 16:20:50.859038    6152 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0528 16:20:50.860443    6152 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0528 16:20:50.860443    6152 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0528 16:20:50.860813    6152 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0528 16:20:50.862027    6152 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0528 16:20:50.862143    6152 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0528 16:20:50.862535    6152 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1675 bytes)
	I0528 16:20:50.863393    6152 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.auto-952300 san=[127.0.0.1 172.18.208.144 auto-952300 localhost minikube]
	I0528 16:20:51.232992    6152 provision.go:177] copyRemoteCerts
	I0528 16:20:51.244536    6152 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0528 16:20:51.244536    6152 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM auto-952300 ).state
	I0528 16:20:53.508856    6152 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 16:20:53.508955    6152 main.go:141] libmachine: [stderr =====>] : 
	I0528 16:20:53.509028    6152 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM auto-952300 ).networkadapters[0]).ipaddresses[0]
	I0528 16:20:56.277963    6152 main.go:141] libmachine: [stdout =====>] : 172.18.208.144
	
	I0528 16:20:56.278338    6152 main.go:141] libmachine: [stderr =====>] : 
	I0528 16:20:56.278632    6152 sshutil.go:53] new ssh client: &{IP:172.18.208.144 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\auto-952300\id_rsa Username:docker}
	I0528 16:20:56.387116    6152 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.1425038s)
	I0528 16:20:56.387687    6152 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0528 16:20:56.438477    6152 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1204 bytes)
	I0528 16:20:56.487896    6152 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0528 16:20:56.536914    6152 provision.go:87] duration metric: took 15.6528457s to configureAuth
	I0528 16:20:56.536914    6152 buildroot.go:189] setting minikube options for container-runtime
	I0528 16:20:56.537915    6152 config.go:182] Loaded profile config "auto-952300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0528 16:20:56.537915    6152 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM auto-952300 ).state
	I0528 16:20:58.781704    6152 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 16:20:58.781704    6152 main.go:141] libmachine: [stderr =====>] : 
	I0528 16:20:58.782754    6152 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM auto-952300 ).networkadapters[0]).ipaddresses[0]
	I0528 16:21:01.421931    6152 main.go:141] libmachine: [stdout =====>] : 172.18.208.144
	
	I0528 16:21:01.421931    6152 main.go:141] libmachine: [stderr =====>] : 
	I0528 16:21:01.427698    6152 main.go:141] libmachine: Using SSH client type: native
	I0528 16:21:01.428603    6152 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5ca4a0] 0x5cd080 <nil>  [] 0s} 172.18.208.144 22 <nil> <nil>}
	I0528 16:21:01.428603    6152 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0528 16:21:01.561427    6152 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0528 16:21:01.561529    6152 buildroot.go:70] root file system type: tmpfs
	I0528 16:21:01.561731    6152 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0528 16:21:01.561731    6152 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM auto-952300 ).state
	I0528 16:21:03.764930    6152 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 16:21:03.764930    6152 main.go:141] libmachine: [stderr =====>] : 
	I0528 16:21:03.765114    6152 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM auto-952300 ).networkadapters[0]).ipaddresses[0]
	
	
	==> Docker <==
	May 28 23:19:46 pause-143500 dockerd[4389]: time="2024-05-28T23:19:46.121692561Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 28 23:19:46 pause-143500 dockerd[4389]: time="2024-05-28T23:19:46.121872262Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 28 23:19:46 pause-143500 dockerd[4389]: time="2024-05-28T23:19:46.122647762Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 28 23:19:46 pause-143500 dockerd[4389]: time="2024-05-28T23:19:46.122888862Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 28 23:19:46 pause-143500 dockerd[4389]: time="2024-05-28T23:19:46.123024262Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 28 23:19:46 pause-143500 dockerd[4389]: time="2024-05-28T23:19:46.123275062Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 28 23:19:46 pause-143500 dockerd[4389]: time="2024-05-28T23:19:46.123884562Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 28 23:19:46 pause-143500 dockerd[4389]: time="2024-05-28T23:19:46.133605065Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 28 23:19:46 pause-143500 dockerd[4389]: time="2024-05-28T23:19:46.133677366Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 28 23:19:46 pause-143500 dockerd[4389]: time="2024-05-28T23:19:46.133696566Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 28 23:19:46 pause-143500 dockerd[4389]: time="2024-05-28T23:19:46.133875466Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 28 23:19:46 pause-143500 dockerd[4389]: time="2024-05-28T23:19:46.153548672Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 28 23:19:46 pause-143500 dockerd[4389]: time="2024-05-28T23:19:46.153653872Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 28 23:19:46 pause-143500 dockerd[4389]: time="2024-05-28T23:19:46.153673072Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 28 23:19:46 pause-143500 dockerd[4389]: time="2024-05-28T23:19:46.153856372Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 28 23:19:50 pause-143500 cri-dockerd[4693]: time="2024-05-28T23:19:50Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	May 28 23:19:50 pause-143500 dockerd[4389]: time="2024-05-28T23:19:50.717108791Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 28 23:19:50 pause-143500 dockerd[4389]: time="2024-05-28T23:19:50.717182091Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 28 23:19:50 pause-143500 dockerd[4389]: time="2024-05-28T23:19:50.717237591Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 28 23:19:50 pause-143500 dockerd[4389]: time="2024-05-28T23:19:50.718031091Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 28 23:19:50 pause-143500 dockerd[4389]: time="2024-05-28T23:19:50.718550092Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 28 23:19:50 pause-143500 dockerd[4389]: time="2024-05-28T23:19:50.720400292Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 28 23:19:50 pause-143500 dockerd[4389]: time="2024-05-28T23:19:50.720464192Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 28 23:19:50 pause-143500 dockerd[4389]: time="2024-05-28T23:19:50.721405793Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 28 23:20:36 pause-143500 cri-dockerd[4693]: time="2024-05-28T23:20:36Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	4dfaab71d9ba0       cbb01a7bd410d       About a minute ago   Running             coredns                   2                   9f14c8ed5ddd1       coredns-7db6d8ff4d-cnlct
	2c5922bc67e4a       747097150317f       About a minute ago   Running             kube-proxy                2                   941b94057123a       kube-proxy-6x5kf
	b8abc09b3f448       25a1387cdab82       About a minute ago   Running             kube-controller-manager   2                   2ed004e6789e5       kube-controller-manager-pause-143500
	faa6b2bc19dd1       a52dc94f0a912       About a minute ago   Running             kube-scheduler            2                   ade24f5b9947e       kube-scheduler-pause-143500
	cad8dcf9ee579       91be940803172       About a minute ago   Running             kube-apiserver            2                   b1c24ab8f7224       kube-apiserver-pause-143500
	574861491c50b       3861cfcd7c04c       About a minute ago   Running             etcd                      2                   bbe768271a122       etcd-pause-143500
	7a7e13a343227       cbb01a7bd410d       About a minute ago   Created             coredns                   1                   ad7e6c6df4735       coredns-7db6d8ff4d-cnlct
	dd8e3518bb026       3861cfcd7c04c       About a minute ago   Exited              etcd                      1                   f40c740b4f9cb       etcd-pause-143500
	ffc0dbf83b7ce       a52dc94f0a912       About a minute ago   Exited              kube-scheduler            1                   50ac9f2052cfe       kube-scheduler-pause-143500
	3935e199e14e7       91be940803172       About a minute ago   Exited              kube-apiserver            1                   489d31f2ac77d       kube-apiserver-pause-143500
	7104421cde4bb       25a1387cdab82       About a minute ago   Exited              kube-controller-manager   1                   a5548ec7313c6       kube-controller-manager-pause-143500
	cbdc0320cd1b9       747097150317f       About a minute ago   Exited              kube-proxy                1                   4191d1c1ad686       kube-proxy-6x5kf
	
	
	==> coredns [4dfaab71d9ba] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 84fd659ad9ac94c01df8e848594b3a1ced14aceda7cd3c6e3545e3a962a1b1458ec39f5e77d24a8e8c54353ecf39f49081a6595b1a2779ec288b57d4bd17c052
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:44038 - 59885 "HINFO IN 6211809350017855337.4642006506074492053. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.022704707s
	
	
	==> coredns [7a7e13a34322] <==
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	
	
	==> dmesg <==
	[ +14.212340] systemd-fstab-generator[2359]: Ignoring "noauto" option for root device
	[  +0.230455] kauditd_printk_skb: 12 callbacks suppressed
	[May28 23:19] systemd-fstab-generator[3955]: Ignoring "noauto" option for root device
	[  +0.169715] kauditd_printk_skb: 71 callbacks suppressed
	[  +0.535446] systemd-fstab-generator[3990]: Ignoring "noauto" option for root device
	[  +0.300322] systemd-fstab-generator[4017]: Ignoring "noauto" option for root device
	[  +0.350812] systemd-fstab-generator[4031]: Ignoring "noauto" option for root device
	[  +5.292783] kauditd_printk_skb: 87 callbacks suppressed
	[  +8.286252] systemd-fstab-generator[4573]: Ignoring "noauto" option for root device
	[  +0.245624] systemd-fstab-generator[4585]: Ignoring "noauto" option for root device
	[  +0.240764] systemd-fstab-generator[4598]: Ignoring "noauto" option for root device
	[  +0.366792] systemd-fstab-generator[4616]: Ignoring "noauto" option for root device
	[  +1.028222] systemd-fstab-generator[4836]: Ignoring "noauto" option for root device
	[  +0.173771] kauditd_printk_skb: 120 callbacks suppressed
	[  +5.107122] kauditd_printk_skb: 103 callbacks suppressed
	[  +1.959131] systemd-fstab-generator[6127]: Ignoring "noauto" option for root device
	[  +6.012201] kauditd_printk_skb: 54 callbacks suppressed
	[May28 23:20] systemd-fstab-generator[6611]: Ignoring "noauto" option for root device
	[  +0.182494] kauditd_printk_skb: 2 callbacks suppressed
	[ +11.725881] systemd-fstab-generator[6679]: Ignoring "noauto" option for root device
	[  +0.166950] kauditd_printk_skb: 14 callbacks suppressed
	[ +21.849098] systemd-fstab-generator[6969]: Ignoring "noauto" option for root device
	[  +0.155147] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.074745] kauditd_printk_skb: 12 callbacks suppressed
	[  +2.819377] systemd-fstab-generator[7069]: Ignoring "noauto" option for root device
	
	
	==> etcd [574861491c50] <==
	{"level":"warn","ts":"2024-05-28T23:20:01.809383Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"169.531852ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-controller-manager-pause-143500\" ","response":"range_response_count:1 size:6669"}
	{"level":"info","ts":"2024-05-28T23:20:01.809823Z","caller":"traceutil/trace.go:171","msg":"trace[288499639] range","detail":"{range_begin:/registry/pods/kube-system/kube-controller-manager-pause-143500; range_end:; response_count:1; response_revision:534; }","duration":"171.086752ms","start":"2024-05-28T23:20:01.638719Z","end":"2024-05-28T23:20:01.809806Z","steps":["trace[288499639] 'agreement among raft nodes before linearized reading'  (duration: 169.519152ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-28T23:20:02.051756Z","caller":"traceutil/trace.go:171","msg":"trace[1172091977] linearizableReadLoop","detail":"{readStateIndex:626; appliedIndex:625; }","duration":"211.911865ms","start":"2024-05-28T23:20:01.83982Z","end":"2024-05-28T23:20:02.051732Z","steps":["trace[1172091977] 'read index received'  (duration: 140.611143ms)","trace[1172091977] 'applied index is now lower than readState.Index'  (duration: 71.300022ms)"],"step_count":2}
	{"level":"info","ts":"2024-05-28T23:20:02.051908Z","caller":"traceutil/trace.go:171","msg":"trace[594824545] transaction","detail":"{read_only:false; response_revision:536; number_of_response:1; }","duration":"219.225867ms","start":"2024-05-28T23:20:01.832668Z","end":"2024-05-28T23:20:02.051894Z","steps":["trace[594824545] 'process raft request'  (duration: 147.928645ms)","trace[594824545] 'compare'  (duration: 71.031422ms)"],"step_count":2}
	{"level":"warn","ts":"2024-05-28T23:20:02.052409Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"212.571365ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/default/kubernetes\" ","response":"range_response_count:1 size:422"}
	{"level":"info","ts":"2024-05-28T23:20:02.052446Z","caller":"traceutil/trace.go:171","msg":"trace[799316342] range","detail":"{range_begin:/registry/services/endpoints/default/kubernetes; range_end:; response_count:1; response_revision:536; }","duration":"212.640865ms","start":"2024-05-28T23:20:01.839797Z","end":"2024-05-28T23:20:02.052438Z","steps":["trace[799316342] 'agreement among raft nodes before linearized reading'  (duration: 212.473165ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-28T23:20:02.052605Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"208.878664ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/pause-143500\" ","response":"range_response_count:1 size:4346"}
	{"level":"info","ts":"2024-05-28T23:20:02.052657Z","caller":"traceutil/trace.go:171","msg":"trace[624974643] range","detail":"{range_begin:/registry/minions/pause-143500; range_end:; response_count:1; response_revision:536; }","duration":"208.946064ms","start":"2024-05-28T23:20:01.843702Z","end":"2024-05-28T23:20:02.052648Z","steps":["trace[624974643] 'agreement among raft nodes before linearized reading'  (duration: 208.869164ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-28T23:20:02.96321Z","caller":"traceutil/trace.go:171","msg":"trace[756619527] linearizableReadLoop","detail":"{readStateIndex:630; appliedIndex:628; }","duration":"105.555132ms","start":"2024-05-28T23:20:02.857637Z","end":"2024-05-28T23:20:02.963192Z","steps":["trace[756619527] 'read index received'  (duration: 82.390625ms)","trace[756619527] 'applied index is now lower than readState.Index'  (duration: 23.163807ms)"],"step_count":2}
	{"level":"info","ts":"2024-05-28T23:20:02.964602Z","caller":"traceutil/trace.go:171","msg":"trace[1824146194] transaction","detail":"{read_only:false; response_revision:540; number_of_response:1; }","duration":"147.332044ms","start":"2024-05-28T23:20:02.817258Z","end":"2024-05-28T23:20:02.96459Z","steps":["trace[1824146194] 'process raft request'  (duration: 145.869444ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-28T23:20:02.963651Z","caller":"traceutil/trace.go:171","msg":"trace[73860901] transaction","detail":"{read_only:false; response_revision:539; number_of_response:1; }","duration":"150.127745ms","start":"2024-05-28T23:20:02.813509Z","end":"2024-05-28T23:20:02.963637Z","steps":["trace[73860901] 'process raft request'  (duration: 126.504338ms)","trace[73860901] 'compare'  (duration: 22.904907ms)"],"step_count":2}
	{"level":"warn","ts":"2024-05-28T23:20:02.963902Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"106.247932ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/coredns\" ","response":"range_response_count:1 size:4139"}
	{"level":"info","ts":"2024-05-28T23:20:02.965345Z","caller":"traceutil/trace.go:171","msg":"trace[415992678] range","detail":"{range_begin:/registry/deployments/kube-system/coredns; range_end:; response_count:1; response_revision:540; }","duration":"107.676633ms","start":"2024-05-28T23:20:02.857607Z","end":"2024-05-28T23:20:02.965284Z","steps":["trace[415992678] 'agreement among raft nodes before linearized reading'  (duration: 106.236132ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-28T23:20:03.28349Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"129.060139ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8226264255619206700 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/deployments/kube-system/coredns\" mod_revision:538 > success:<request_put:<key:\"/registry/deployments/kube-system/coredns\" value_size:4075 >> failure:<request_range:<key:\"/registry/deployments/kube-system/coredns\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-05-28T23:20:03.283581Z","caller":"traceutil/trace.go:171","msg":"trace[1371822935] linearizableReadLoop","detail":"{readStateIndex:631; appliedIndex:630; }","duration":"214.767465ms","start":"2024-05-28T23:20:03.068796Z","end":"2024-05-28T23:20:03.283563Z","steps":["trace[1371822935] 'read index received'  (duration: 85.727026ms)","trace[1371822935] 'applied index is now lower than readState.Index'  (duration: 129.039139ms)"],"step_count":2}
	{"level":"warn","ts":"2024-05-28T23:20:03.283954Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"215.154965ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/pause-143500\" ","response":"range_response_count:1 size:4346"}
	{"level":"info","ts":"2024-05-28T23:20:03.284015Z","caller":"traceutil/trace.go:171","msg":"trace[1017570035] range","detail":"{range_begin:/registry/minions/pause-143500; range_end:; response_count:1; response_revision:541; }","duration":"215.251165ms","start":"2024-05-28T23:20:03.068754Z","end":"2024-05-28T23:20:03.284005Z","steps":["trace[1017570035] 'agreement among raft nodes before linearized reading'  (duration: 215.152765ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-28T23:20:03.284322Z","caller":"traceutil/trace.go:171","msg":"trace[1566162238] transaction","detail":"{read_only:false; response_revision:541; number_of_response:1; }","duration":"305.767292ms","start":"2024-05-28T23:20:02.978498Z","end":"2024-05-28T23:20:03.284265Z","steps":["trace[1566162238] 'process raft request'  (duration: 175.868453ms)","trace[1566162238] 'compare'  (duration: 128.880839ms)"],"step_count":2}
	{"level":"warn","ts":"2024-05-28T23:20:03.285573Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-28T23:20:02.978387Z","time spent":"307.110992ms","remote":"127.0.0.1:58782","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4124,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/deployments/kube-system/coredns\" mod_revision:538 > success:<request_put:<key:\"/registry/deployments/kube-system/coredns\" value_size:4075 >> failure:<request_range:<key:\"/registry/deployments/kube-system/coredns\" > >"}
	{"level":"warn","ts":"2024-05-28T23:20:04.491173Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"108.687432ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8226264255619206712 > lease_revoke:<id:72298fc17b7becc2>","response":"size:27"}
	{"level":"info","ts":"2024-05-28T23:20:04.590698Z","caller":"traceutil/trace.go:171","msg":"trace[1345004039] linearizableReadLoop","detail":"{readStateIndex:633; appliedIndex:631; }","duration":"317.924295ms","start":"2024-05-28T23:20:04.272342Z","end":"2024-05-28T23:20:04.590266Z","steps":["trace[1345004039] 'read index received'  (duration: 110.757933ms)","trace[1345004039] 'applied index is now lower than readState.Index'  (duration: 207.110262ms)"],"step_count":2}
	{"level":"warn","ts":"2024-05-28T23:20:04.590918Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"318.664995ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/pause-143500\" ","response":"range_response_count:1 size:4346"}
	{"level":"info","ts":"2024-05-28T23:20:04.591048Z","caller":"traceutil/trace.go:171","msg":"trace[723608503] range","detail":"{range_begin:/registry/minions/pause-143500; range_end:; response_count:1; response_revision:541; }","duration":"318.827895ms","start":"2024-05-28T23:20:04.272205Z","end":"2024-05-28T23:20:04.591033Z","steps":["trace[723608503] 'agreement among raft nodes before linearized reading'  (duration: 318.593495ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-28T23:20:04.591658Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-28T23:20:04.272189Z","time spent":"318.956095ms","remote":"127.0.0.1:58506","response type":"/etcdserverpb.KV/Range","request count":0,"request size":32,"response count":1,"response size":4368,"request content":"key:\"/registry/minions/pause-143500\" "}
	{"level":"info","ts":"2024-05-28T23:20:11.744186Z","caller":"traceutil/trace.go:171","msg":"trace[6429233] transaction","detail":"{read_only:false; response_revision:544; number_of_response:1; }","duration":"161.310346ms","start":"2024-05-28T23:20:11.582855Z","end":"2024-05-28T23:20:11.744166Z","steps":["trace[6429233] 'process raft request'  (duration: 153.925144ms)"],"step_count":1}
	
	
	==> etcd [dd8e3518bb02] <==
	{"level":"warn","ts":"2024-05-28T23:19:40.568979Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	{"level":"info","ts":"2024-05-28T23:19:40.569089Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://172.18.215.196:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://172.18.215.196:2380","--initial-cluster=pause-143500=https://172.18.215.196:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://172.18.215.196:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://172.18.215.196:2380","--name=pause-143500","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trus
ted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	{"level":"info","ts":"2024-05-28T23:19:40.569189Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	{"level":"warn","ts":"2024-05-28T23:19:40.569217Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	{"level":"info","ts":"2024-05-28T23:19:40.569228Z","caller":"embed/etcd.go:127","msg":"configuring peer listeners","listen-peer-urls":["https://172.18.215.196:2380"]}
	{"level":"info","ts":"2024-05-28T23:19:40.569266Z","caller":"embed/etcd.go:494","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-05-28T23:19:40.57264Z","caller":"embed/etcd.go:135","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://172.18.215.196:2379"]}
	{"level":"info","ts":"2024-05-28T23:19:40.578629Z","caller":"embed/etcd.go:308","msg":"starting an etcd server","etcd-version":"3.5.12","git-sha":"e7b3bb6cc","go-version":"go1.20.13","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"pause-143500","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://172.18.215.196:2380"],"listen-peer-urls":["https://172.18.215.196:2380"],"advertise-client-urls":["https://172.18.215.196:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.18.215.196:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cl
uster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	{"level":"info","ts":"2024-05-28T23:19:40.649029Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"69.969056ms"}
	{"level":"info","ts":"2024-05-28T23:19:40.693589Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	
	
	==> kernel <==
	 23:21:24 up 9 min,  0 users,  load average: 0.64, 0.66, 0.33
	Linux pause-143500 5.10.207 #1 SMP Wed May 22 22:17:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [3935e199e14e] <==
	I0528 23:19:40.260228       1 options.go:221] external host was not specified, using 172.18.215.196
	I0528 23:19:40.261491       1 server.go:148] Version: v1.30.1
	I0528 23:19:40.261562       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	W0528 23:19:41.146888       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0528 23:19:41.147826       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0528 23:19:41.148138       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0528 23:19:41.151661       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0528 23:19:41.154200       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0528 23:19:41.154370       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0528 23:19:41.154537       1 instance.go:299] Using reconciler: lease
	W0528 23:19:41.156204       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0528 23:19:42.147670       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0528 23:19:42.149101       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0528 23:19:42.157060       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [cad8dcf9ee57] <==
	I0528 23:19:50.141117       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0528 23:19:50.152689       1 shared_informer.go:320] Caches are synced for configmaps
	I0528 23:19:50.153126       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0528 23:19:50.156066       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0528 23:19:50.156193       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0528 23:19:50.157540       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0528 23:19:50.157634       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0528 23:19:50.157664       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0528 23:19:50.157891       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0528 23:19:50.157968       1 policy_source.go:224] refreshing policies
	I0528 23:19:50.158034       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0528 23:19:50.159252       1 aggregator.go:165] initial CRD sync complete...
	I0528 23:19:50.160113       1 autoregister_controller.go:141] Starting autoregister controller
	I0528 23:19:50.160227       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0528 23:19:50.160260       1 cache.go:39] Caches are synced for autoregister controller
	I0528 23:19:50.197125       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	E0528 23:19:50.211931       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0528 23:19:50.983163       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0528 23:19:52.102745       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0528 23:19:52.126869       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0528 23:19:52.229626       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0528 23:19:52.354800       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0528 23:19:52.382664       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0528 23:20:02.812690       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0528 23:20:02.816208       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [7104421cde4b] <==
	
	
	==> kube-controller-manager [b8abc09b3f44] <==
	I0528 23:20:02.722184       1 shared_informer.go:320] Caches are synced for persistent volume
	I0528 23:20:02.722974       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0528 23:20:02.723191       1 shared_informer.go:320] Caches are synced for PVC protection
	I0528 23:20:02.727357       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0528 23:20:02.727691       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="54.9µs"
	I0528 23:20:02.727016       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0528 23:20:02.727719       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0528 23:20:02.730646       1 shared_informer.go:320] Caches are synced for cronjob
	I0528 23:20:02.731808       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-143500"
	I0528 23:20:02.732255       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0528 23:20:02.741467       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0528 23:20:02.745476       1 shared_informer.go:320] Caches are synced for endpoint
	I0528 23:20:02.747742       1 shared_informer.go:320] Caches are synced for stateful set
	I0528 23:20:02.753342       1 shared_informer.go:320] Caches are synced for crt configmap
	I0528 23:20:02.753661       1 shared_informer.go:320] Caches are synced for disruption
	I0528 23:20:02.807376       1 shared_informer.go:320] Caches are synced for attach detach
	I0528 23:20:02.850378       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0528 23:20:02.932217       1 shared_informer.go:320] Caches are synced for resource quota
	I0528 23:20:02.962094       1 shared_informer.go:320] Caches are synced for resource quota
	I0528 23:20:03.396087       1 shared_informer.go:320] Caches are synced for garbage collector
	I0528 23:20:03.396188       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0528 23:20:03.432006       1 shared_informer.go:320] Caches are synced for garbage collector
	I0528 23:20:41.488569       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="58.701513ms"
	I0528 23:20:41.517735       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="28.856607ms"
	I0528 23:20:41.519861       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="37.7µs"
	
	
	==> kube-proxy [2c5922bc67e4] <==
	I0528 23:19:50.897223       1 server_linux.go:69] "Using iptables proxy"
	I0528 23:19:50.962936       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.18.215.196"]
	I0528 23:19:51.047575       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0528 23:19:51.047632       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0528 23:19:51.047653       1 server_linux.go:165] "Using iptables Proxier"
	I0528 23:19:51.051052       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0528 23:19:51.051523       1 server.go:872] "Version info" version="v1.30.1"
	I0528 23:19:51.052097       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0528 23:19:51.053775       1 config.go:192] "Starting service config controller"
	I0528 23:19:51.054162       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0528 23:19:51.054546       1 config.go:101] "Starting endpoint slice config controller"
	I0528 23:19:51.054756       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0528 23:19:51.055601       1 config.go:319] "Starting node config controller"
	I0528 23:19:51.055858       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0528 23:19:51.156211       1 shared_informer.go:320] Caches are synced for node config
	I0528 23:19:51.156395       1 shared_informer.go:320] Caches are synced for service config
	I0528 23:19:51.156437       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [cbdc0320cd1b] <==
	I0528 23:19:39.482723       1 server_linux.go:69] "Using iptables proxy"
	E0528 23:19:39.490188       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-143500\": dial tcp 172.18.215.196:8443: connect: connection refused"
	
	
	==> kube-scheduler [faa6b2bc19dd] <==
	I0528 23:19:47.793749       1 serving.go:380] Generated self-signed cert in-memory
	W0528 23:19:50.074125       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0528 23:19:50.074262       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0528 23:19:50.074278       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0528 23:19:50.074287       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0528 23:19:50.110892       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.1"
	I0528 23:19:50.110947       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0528 23:19:50.113869       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0528 23:19:50.114200       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0528 23:19:50.113890       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0528 23:19:50.114718       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0528 23:19:50.216413       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [ffc0dbf83b7c] <==
	
	
	==> kubelet <==
	May 28 23:20:37 pause-143500 kubelet[6976]: E0528 23:20:37.032683    6976 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"etcd-pause-143500\" already exists" pod="kube-system/etcd-pause-143500"
	May 28 23:20:37 pause-143500 kubelet[6976]: I0528 23:20:37.038906    6976 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="50ac9f2052cfe63af7a2e3d71c4fa891cd8b65de26f4e0a171f077e13c5bb152"
	May 28 23:20:37 pause-143500 kubelet[6976]: E0528 23:20:37.061597    6976 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-pause-143500\" already exists" pod="kube-system/kube-scheduler-pause-143500"
	May 28 23:20:37 pause-143500 kubelet[6976]: I0528 23:20:37.065592    6976 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/805829f8e70e7452f13dcd5585b153ae-etcd-certs\") pod \"etcd-pause-143500\" (UID: \"805829f8e70e7452f13dcd5585b153ae\") " pod="kube-system/etcd-pause-143500"
	May 28 23:20:37 pause-143500 kubelet[6976]: I0528 23:20:37.065849    6976 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/402d5a2a07197ddbb96e0e0393cae5b8-k8s-certs\") pod \"kube-apiserver-pause-143500\" (UID: \"402d5a2a07197ddbb96e0e0393cae5b8\") " pod="kube-system/kube-apiserver-pause-143500"
	May 28 23:20:37 pause-143500 kubelet[6976]: I0528 23:20:37.065950    6976 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/68465f42bf2620687eb6d658be072560-ca-certs\") pod \"kube-controller-manager-pause-143500\" (UID: \"68465f42bf2620687eb6d658be072560\") " pod="kube-system/kube-controller-manager-pause-143500"
	May 28 23:20:37 pause-143500 kubelet[6976]: I0528 23:20:37.066040    6976 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/68465f42bf2620687eb6d658be072560-flexvolume-dir\") pod \"kube-controller-manager-pause-143500\" (UID: \"68465f42bf2620687eb6d658be072560\") " pod="kube-system/kube-controller-manager-pause-143500"
	May 28 23:20:37 pause-143500 kubelet[6976]: I0528 23:20:37.066077    6976 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9415fd1ed35511af214abefec5d21b92-kubeconfig\") pod \"kube-scheduler-pause-143500\" (UID: \"9415fd1ed35511af214abefec5d21b92\") " pod="kube-system/kube-scheduler-pause-143500"
	May 28 23:20:37 pause-143500 kubelet[6976]: I0528 23:20:37.066100    6976 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/805829f8e70e7452f13dcd5585b153ae-etcd-data\") pod \"etcd-pause-143500\" (UID: \"805829f8e70e7452f13dcd5585b153ae\") " pod="kube-system/etcd-pause-143500"
	May 28 23:20:37 pause-143500 kubelet[6976]: I0528 23:20:37.066120    6976 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/402d5a2a07197ddbb96e0e0393cae5b8-ca-certs\") pod \"kube-apiserver-pause-143500\" (UID: \"402d5a2a07197ddbb96e0e0393cae5b8\") " pod="kube-system/kube-apiserver-pause-143500"
	May 28 23:20:37 pause-143500 kubelet[6976]: I0528 23:20:37.066140    6976 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/402d5a2a07197ddbb96e0e0393cae5b8-usr-share-ca-certificates\") pod \"kube-apiserver-pause-143500\" (UID: \"402d5a2a07197ddbb96e0e0393cae5b8\") " pod="kube-system/kube-apiserver-pause-143500"
	May 28 23:20:37 pause-143500 kubelet[6976]: I0528 23:20:37.066161    6976 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/68465f42bf2620687eb6d658be072560-k8s-certs\") pod \"kube-controller-manager-pause-143500\" (UID: \"68465f42bf2620687eb6d658be072560\") " pod="kube-system/kube-controller-manager-pause-143500"
	May 28 23:20:37 pause-143500 kubelet[6976]: I0528 23:20:37.066180    6976 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/68465f42bf2620687eb6d658be072560-kubeconfig\") pod \"kube-controller-manager-pause-143500\" (UID: \"68465f42bf2620687eb6d658be072560\") " pod="kube-system/kube-controller-manager-pause-143500"
	May 28 23:20:37 pause-143500 kubelet[6976]: I0528 23:20:37.066227    6976 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/68465f42bf2620687eb6d658be072560-usr-share-ca-certificates\") pod \"kube-controller-manager-pause-143500\" (UID: \"68465f42bf2620687eb6d658be072560\") " pod="kube-system/kube-controller-manager-pause-143500"
	May 28 23:20:37 pause-143500 kubelet[6976]: I0528 23:20:37.594473    6976 apiserver.go:52] "Watching apiserver"
	May 28 23:20:37 pause-143500 kubelet[6976]: I0528 23:20:37.598605    6976 topology_manager.go:215] "Topology Admit Handler" podUID="28de2bb3-953e-4c97-b469-0db620445bdd" podNamespace="kube-system" podName="kube-proxy-6x5kf"
	May 28 23:20:37 pause-143500 kubelet[6976]: I0528 23:20:37.598970    6976 topology_manager.go:215] "Topology Admit Handler" podUID="401d6683-7f4b-433b-85ea-6a1de4c1894a" podNamespace="kube-system" podName="coredns-7db6d8ff4d-cnlct"
	May 28 23:20:37 pause-143500 kubelet[6976]: I0528 23:20:37.624827    6976 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	May 28 23:20:37 pause-143500 kubelet[6976]: I0528 23:20:37.674582    6976 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/28de2bb3-953e-4c97-b469-0db620445bdd-xtables-lock\") pod \"kube-proxy-6x5kf\" (UID: \"28de2bb3-953e-4c97-b469-0db620445bdd\") " pod="kube-system/kube-proxy-6x5kf"
	May 28 23:20:37 pause-143500 kubelet[6976]: I0528 23:20:37.674699    6976 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/28de2bb3-953e-4c97-b469-0db620445bdd-lib-modules\") pod \"kube-proxy-6x5kf\" (UID: \"28de2bb3-953e-4c97-b469-0db620445bdd\") " pod="kube-system/kube-proxy-6x5kf"
	May 28 23:20:38 pause-143500 kubelet[6976]: E0528 23:20:38.080520    6976 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-pause-143500\" already exists" pod="kube-system/kube-scheduler-pause-143500"
	May 28 23:20:38 pause-143500 kubelet[6976]: E0528 23:20:38.088835    6976 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"etcd-pause-143500\" already exists" pod="kube-system/etcd-pause-143500"
	May 28 23:20:44 pause-143500 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	May 28 23:20:44 pause-143500 systemd[1]: kubelet.service: Deactivated successfully.
	May 28 23:20:44 pause-143500 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0528 16:21:05.831385    4048 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p pause-143500 -n pause-143500
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p pause-143500 -n pause-143500: exit status 2 (12.7076463s)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
** stderr ** 
	W0528 16:21:24.640249    2756 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "pause-143500" apiserver is not running, skipping kubectl commands (state="Paused")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-143500 -n pause-143500
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-143500 -n pause-143500: exit status 2 (13.3119982s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
** stderr ** 
	W0528 16:21:37.346429    8796 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestPause/serial/DeletePaused FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/DeletePaused]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p pause-143500 logs -n 25
panic: test timed out after 3h0m0s
running tests:
	TestKubernetesUpgrade (19m22s)
	TestNetworkPlugins (28m10s)
	TestNetworkPlugins/group/auto (4m30s)
	TestNetworkPlugins/group/auto/Start (4m30s)
	TestNetworkPlugins/group/kindnet (1m41s)
	TestNetworkPlugins/group/kindnet/Start (1m41s)
	TestPause (16m3s)
	TestPause/serial (16m3s)
	TestPause/serial/DeletePaused (1m11s)
	TestStartStop (16m3s)

                                                
                                                
goroutine 2470 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2366 +0x385
created by time.goFunc
	/usr/local/go/src/time/sleep.go:177 +0x2d

                                                
                                                
goroutine 1 [chan receive, 17 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc0008f6820, 0xc00090fbb0)
	/usr/local/go/src/testing/testing.go:1695 +0x134
testing.runTests(0xc0006ba090, {0x50d9f80, 0x2a, 0x2a}, {0xc00082e060?, 0xc00090fc30?, 0x50fd240?})
	/usr/local/go/src/testing/testing.go:2159 +0x445
testing.(*M).Run(0xc0006a2000)
	/usr/local/go/src/testing/testing.go:2027 +0x68b
k8s.io/minikube/test/integration.TestMain(0xc0006a2000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/main_test.go:62 +0x8b
main.main()
	_testmain.go:131 +0x195

                                                
                                                
goroutine 14 [select]:
go.opencensus.io/stats/view.(*worker).start(0xc000070e80)
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:292 +0x9f
created by go.opencensus.io/stats/view.init.0 in goroutine 1
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:34 +0x8d

                                                
                                                
goroutine 161 [select, 1 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3d3c220, 0xc000106360}, 0xc000485f50, 0xc000485f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3d3c220, 0xc000106360}, 0xf8?, 0xc000485f50, 0xc000485f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3d3c220?, 0xc000106360?}, 0xc0000b65f8?, 0x249c9b0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0000b65a0?, 0xc0000b65a0?, 0xc0000b65a0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 174
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 2203 [chan receive, 29 minutes]:
testing.(*testContext).waitParallel(0xc0006fb590)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0015f6680)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0015f6680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0015f6680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0015f6680, 0xc0019cc300)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2192
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 23 [select]:
k8s.io/klog/v2.(*flushDaemon).run.func1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.120.1/klog.go:1174 +0x117
created by k8s.io/klog/v2.(*flushDaemon).run in goroutine 22
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.120.1/klog.go:1170 +0x171

                                                
                                                
goroutine 2095 [chan receive, 29 minutes]:
testing.(*T).Run(0xc0015991e0, {0x2cb6356?, 0xb0f48d?}, 0xc0005604b0)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestNetworkPlugins(0xc0015991e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:52 +0xd4
testing.tRunner(0xc0015991e0, 0x37c22d8)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2204 [chan receive, 29 minutes]:
testing.(*testContext).waitParallel(0xc0006fb590)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0015f6820)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0015f6820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0015f6820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0015f6820, 0xc0019cc380)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2192
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 160 [sync.Cond.Wait, 1 minutes]:
sync.runtime_notifyListWait(0xc00084af90, 0x3c)
	/usr/local/go/src/runtime/sema.go:569 +0x15d
sync.(*Cond).Wait(0x27ac500?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc000d80a20)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc00084afc0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000234dc0, {0x3d18760, 0xc000576240}, 0x1, 0xc000106360)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000234dc0, 0x3b9aca00, 0x0, 0x1, 0xc000106360)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 174
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 2368 [syscall, locked to thread]:
syscall.SyscallN(0x1c66fd079f0?, {0xc00180fb20?, 0xab7ea5?, 0x8?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x1c66fd079f0?, 0xc00180fb80?, 0xaafdd6?, 0x518a6a0?, 0xc00180fc08?, 0xaa2985?, 0x1c64a620a28?, 0x10000?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x684, {0xc001443a12?, 0x5ee, 0xb5417f?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc00091e508?, {0xc001443a12?, 0xadc1be?, 0x10000?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc00091e508, {0xc001443a12, 0x5ee, 0x5ee})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc00011d5a0, {0xc001443a12?, 0xc00180fd98?, 0x7e49?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc000760a50, {0x3d17320, 0xc000a0e178})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3d17460, 0xc000760a50}, {0x3d17320, 0xc000a0e178}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x3d17460, 0xc000760a50})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x508db70?, {0x3d17460?, 0xc000760a50?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x3d17460, 0xc000760a50}, {0x3d173e0, 0xc00011d5a0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc0019da600?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2366
	/usr/local/go/src/os/exec/exec.go:723 +0xa2b

                                                
                                                
goroutine 1059 [chan send, 148 minutes]:
os/exec.(*Cmd).watchCtx(0xc000cf0420, 0xc00181c1e0)
	/usr/local/go/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 1058
	/usr/local/go/src/os/exec/exec.go:750 +0x9f3

                                                
                                                
goroutine 173 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc000d80b40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 96
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 174 [chan receive, 173 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc00084afc0, 0xc000106360)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 96
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cache.go:122 +0x585

                                                
                                                
goroutine 178 [select, 1 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 161
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 2418 [select, 5 minutes]:
os/exec.(*Cmd).watchCtx(0xc00160e420, 0xc000055020)
	/usr/local/go/src/os/exec/exec.go:764 +0xb5
created by os/exec.(*Cmd).Start in goroutine 2184
	/usr/local/go/src/os/exec/exec.go:750 +0x9f3

                                                
                                                
goroutine 771 [IO wait, 162 minutes]:
internal/poll.runtime_pollWait(0x1c66fe69460, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc000926408?, 0x0?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.execIO(0xc000a12ca0, 0xc001a33bb0)
	/usr/local/go/src/internal/poll/fd_windows.go:175 +0xe6
internal/poll.(*FD).acceptOne(0xc000a12c88, 0x320, {0xc00047c5a0?, 0x0?, 0x0?}, 0xc000926008?)
	/usr/local/go/src/internal/poll/fd_windows.go:944 +0x67
internal/poll.(*FD).Accept(0xc000a12c88, 0xc001a33d90)
	/usr/local/go/src/internal/poll/fd_windows.go:978 +0x1bc
net.(*netFD).accept(0xc000a12c88)
	/usr/local/go/src/net/fd_windows.go:178 +0x54
net.(*TCPListener).accept(0xc0000aae80)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x1e
net.(*TCPListener).Accept(0xc0000aae80)
	/usr/local/go/src/net/tcpsock.go:327 +0x30
net/http.(*Server).Serve(0xc0006860f0, {0x3d2f2c0, 0xc0000aae80})
	/usr/local/go/src/net/http/server.go:3255 +0x33e
net/http.(*Server).ListenAndServe(0xc0006860f0)
	/usr/local/go/src/net/http/server.go:3184 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(0xd?, 0xc0015989c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2209 +0x18
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 752
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2208 +0x129

                                                
                                                
goroutine 2209 [chan receive, 29 minutes]:
testing.(*testContext).waitParallel(0xc0006fb590)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0015f7040)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0015f7040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0015f7040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0015f7040, 0xc0019cc600)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2192
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2208 [chan receive, 29 minutes]:
testing.(*testContext).waitParallel(0xc0006fb590)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0015f6ea0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0015f6ea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0015f6ea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0015f6ea0, 0xc0019cc580)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2192
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 1184 [chan send, 144 minutes]:
os/exec.(*Cmd).watchCtx(0xc001b58c60, 0xc001c4e120)
	/usr/local/go/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 898
	/usr/local/go/src/os/exec/exec.go:750 +0x9f3

                                                
                                                
goroutine 2205 [chan receive, 29 minutes]:
testing.(*testContext).waitParallel(0xc0006fb590)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0015f69c0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0015f69c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0015f69c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0015f69c0, 0xc0019cc400)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2192
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 990 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc000a073e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 907
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 2331 [chan receive, 17 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc001599380, 0x37c24f8)
	/usr/local/go/src/testing/testing.go:1695 +0x134
created by testing.(*T).Run in goroutine 2180
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2384 [syscall, locked to thread]:
syscall.SyscallN(0xc001a6bce8?, {0xc0014dfb20?, 0xab7ea5?, 0x518a6a0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x1?, 0xc0014dfb80?, 0xaafdd6?, 0x518a6a0?, 0xc0014dfc08?, 0xaa2985?, 0x0?, 0x0?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x548, {0xc000d3c24f?, 0x5b1, 0xb5417f?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc000a12008?, {0xc000d3c24f?, 0x13b?, 0x800?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc000a12008, {0xc000d3c24f, 0x5b1, 0x5b1})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000a0e338, {0xc000d3c24f?, 0xd?, 0x20e?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc000576f90, {0x3d17320, 0xc00082a070})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3d17460, 0xc000576f90}, {0x3d17320, 0xc00082a070}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x4fff9e0?, {0x3d17460, 0xc000576f90})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x508db70?, {0x3d17460?, 0xc000576f90?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x3d17460, 0xc000576f90}, {0x3d173e0, 0xc000a0e338}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0x37c22a0?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2184
	/usr/local/go/src/os/exec/exec.go:723 +0xa2b

                                                
                                                
goroutine 2367 [syscall, locked to thread]:
syscall.SyscallN(0x0?, {0xc0014e9b20?, 0xab7ea5?, 0x518a6a0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0xc0014e9b98?, 0xc0014e9b80?, 0xaafdd6?, 0x518a6a0?, 0xc0014e9c08?, 0xaa2985?, 0x1c64a620598?, 0xc0014e9b4d?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x668, {0xc000d3ca59?, 0x5a7, 0xb5417f?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc0009e3408?, {0xc000d3ca59?, 0x0?, 0x800?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc0009e3408, {0xc000d3ca59, 0x5a7, 0x5a7})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc00011d4d8, {0xc000d3ca59?, 0xc001a2dc00?, 0x224?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc000760960, {0x3d17320, 0xc0004d0028})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3d17460, 0xc000760960}, {0x3d17320, 0xc0004d0028}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc0014e9e78?, {0x3d17460, 0xc000760960})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x508db70?, {0x3d17460?, 0xc000760960?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x3d17460, 0xc000760960}, {0x3d173e0, 0xc00011d4d8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc001c4e180?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2366
	/usr/local/go/src/os/exec/exec.go:723 +0xa2b

                                                
                                                
goroutine 991 [chan receive, 150 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0000b5d40, 0xc000106360)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 907
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cache.go:122 +0x585

                                                
                                                
goroutine 2443 [syscall, 1 minutes, locked to thread]:
syscall.SyscallN(0xc0015a9b00?, {0xc0015a9b20?, 0xab7ea5?, 0x518a6a0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0xc00183a059?, 0xc0015a9b80?, 0xaafdd6?, 0x518a6a0?, 0xc0015a9c08?, 0xaa2985?, 0x1c64a620a28?, 0xb18c67?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x640, {0xc000283d40?, 0x2c0, 0xb5417f?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc00091ec88?, {0xc000283d40?, 0xadc1be?, 0x2000?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc00091ec88, {0xc000283d40, 0x2c0, 0x2c0})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc00068ae80, {0xc000283d40?, 0x1c66fae19a8?, 0xe1c?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc000576b70, {0x3d17320, 0xc00011cf90})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3d17460, 0xc000576b70}, {0x3d17320, 0xc00011cf90}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x3d17460, 0xc000576b70})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x508db70?, {0x3d17460?, 0xc000576b70?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x3d17460, 0xc000576b70}, {0x3d173e0, 0xc00068ae80}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc000054120?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2405
	/usr/local/go/src/os/exec/exec.go:723 +0xa2b

                                                
                                                
goroutine 2459 [syscall, locked to thread]:
syscall.SyscallN(0xc0015abb00?, {0xc0015abb20?, 0xab7ea5?, 0x518a6a0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0xc001839880?, 0xc0015abb80?, 0xaafdd6?, 0x518a6a0?, 0xc0015abc08?, 0xaa281b?, 0xa98ba6?, 0xb18c41?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x65c, {0xc000855d3a?, 0x2c6, 0xc000855c00?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc0015a6008?, {0xc000855d3a?, 0xadc1be?, 0x400?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc0015a6008, {0xc000855d3a, 0x2c6, 0x2c6})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc00068af50, {0xc000855d3a?, 0xc0015abd98?, 0x13a?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0018708d0, {0x3d17320, 0xc00068af70})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3d17460, 0xc0018708d0}, {0x3d17320, 0xc00068af70}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x3d17460, 0xc0018708d0})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x508db70?, {0x3d17460?, 0xc0018708d0?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x3d17460, 0xc0018708d0}, {0x3d173e0, 0xc00068af50}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc00098dd70?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2449
	/usr/local/go/src/os/exec/exec.go:723 +0xa2b

                                                
                                                
goroutine 998 [sync.Cond.Wait, 3 minutes]:
sync.runtime_notifyListWait(0xc0000b5cd0, 0x35)
	/usr/local/go/src/runtime/sema.go:569 +0x15d
sync.(*Cond).Wait(0x27ac500?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc000a072c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0000b5d40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00194a560, {0x3d18760, 0xc001b165d0}, 0x1, 0xc000106360)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00194a560, 0x3b9aca00, 0x0, 0x1, 0xc000106360)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 991
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 999 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3d3c220, 0xc000106360}, 0xc001609f50, 0xc001609f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3d3c220, 0xc000106360}, 0xa0?, 0xc001609f50, 0xc001609f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3d3c220?, 0xc000106360?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc001609fd0?, 0xc2e404?, 0xc0009738c0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 991
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 2207 [chan receive, 29 minutes]:
testing.(*testContext).waitParallel(0xc0006fb590)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0015f6d00)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0015f6d00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0015f6d00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0015f6d00, 0xc0019cc500)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2192
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2385 [syscall, locked to thread]:
syscall.SyscallN(0xc0016b1ce8?, {0xc0016b1b20?, 0xab7ea5?, 0x518a6a0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x67?, 0xc0016b1b80?, 0xaafdd6?, 0x518a6a0?, 0xc0016b1c08?, 0xaa2985?, 0x1c64a620a28?, 0x77?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x5d4, {0xc000d5bc50?, 0x3b0, 0xb5417f?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc000a12508?, {0xc000d5bc50?, 0x13b?, 0x4000?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc000a12508, {0xc000d5bc50, 0x3b0, 0x3b0})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000a0e3a8, {0xc000d5bc50?, 0x12?, 0x1e30?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc000577050, {0x3d17320, 0xc0004d0040})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3d17460, 0xc000577050}, {0x3d17320, 0xc0004d0040}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x4fff9e0?, {0x3d17460, 0xc000577050})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x508db70?, {0x3d17460?, 0xc000577050?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x3d17460, 0xc000577050}, {0x3d173e0, 0xc000a0e3a8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0x37c22a0?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2184
	/usr/local/go/src/os/exec/exec.go:723 +0xa2b

                                                
                                                
goroutine 1000 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 999
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 2097 [chan receive, 17 minutes]:
testing.(*T).Run(0xc0015996c0, {0x2cb7869?, 0xd18c2e2800?}, 0xc0016b6270)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestPause(0xc0015996c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/pause_test.go:41 +0x159
testing.tRunner(0xc0015996c0, 0x37c22f0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2458 [syscall, locked to thread]:
syscall.SyscallN(0x1c66fcf7e28?, {0xc00164db20?, 0xab7ea5?, 0x8?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x1c66fcf7e28?, 0xc00164db80?, 0xaafdd6?, 0x518a6a0?, 0xc00164dc08?, 0xaa2985?, 0x1c64a620eb8?, 0x10000?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x610, {0xc001417fd7?, 0x2029, 0xb5417f?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc00091f908?, {0xc001417fd7?, 0xadc1be?, 0x10000?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc00091f908, {0xc001417fd7, 0x2029, 0x2029})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc00068af20, {0xc001417fd7?, 0xc00164dd98?, 0x8000?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0018707e0, {0x3d17320, 0xc000a0e020})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3d17460, 0xc0018707e0}, {0x3d17320, 0xc000a0e020}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x3d17460, 0xc0018707e0})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x508db70?, {0x3d17460?, 0xc0018707e0?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x3d17460, 0xc0018707e0}, {0x3d173e0, 0xc00068af20}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc000054f60?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2449
	/usr/local/go/src/os/exec/exec.go:723 +0xa2b

                                                
                                                
goroutine 2332 [chan receive, 17 minutes]:
testing.(*testContext).waitParallel(0xc0006fb590)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc001599860)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc001599860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc001599860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc001599860, 0xc00023e740)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2331
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2366 [syscall, 5 minutes, locked to thread]:
syscall.SyscallN(0x7ff843424de0?, {0xc00141dbd0?, 0x3?, 0x0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0x3?, 0x3?, 0x1?, 0x2?, 0x0?)
	/usr/local/go/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0x6c0, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1142 +0x5d
os.(*Process).wait(0xc0017326f0)
	/usr/local/go/src/os/exec_windows.go:18 +0x50
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc000d42000)
	/usr/local/go/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc000d42000)
	/usr/local/go/src/os/exec/exec.go:607 +0x2d
k8s.io/minikube/test/integration.Run(0xc000655860, 0xc000d42000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1.1(0xc000655860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:112 +0x52
testing.tRunner(0xc000655860, 0xc000760660)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2201
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2180 [chan receive, 17 minutes]:
testing.(*T).Run(0xc001599ba0, {0x2cb6356?, 0xbe7333?}, 0x37c24f8)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop(0xc001599ba0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:46 +0x35
testing.tRunner(0xc001599ba0, 0x37c2320)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2405 [syscall, 1 minutes, locked to thread]:
syscall.SyscallN(0x7ff843424de0?, {0xc00180dbd0?, 0x3?, 0x0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0x3?, 0x3?, 0x1?, 0x2?, 0x0?)
	/usr/local/go/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0x6a8, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1142 +0x5d
os.(*Process).wait(0xc001732780)
	/usr/local/go/src/os/exec_windows.go:18 +0x50
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc000d42160)
	/usr/local/go/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc000d42160)
	/usr/local/go/src/os/exec/exec.go:607 +0x2d
k8s.io/minikube/test/integration.Run(0xc0014ba000, 0xc000d42160)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1.1(0xc0014ba000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:112 +0x52
testing.tRunner(0xc0014ba000, 0xc0005763f0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2206
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2192 [chan receive, 29 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc0014ba1a0, 0xc0005604b0)
	/usr/local/go/src/testing/testing.go:1695 +0x134
created by testing.(*T).Run in goroutine 2095
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2202 [chan receive, 29 minutes]:
testing.(*testContext).waitParallel(0xc0006fb590)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0015f64e0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0015f64e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0015f64e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0015f64e0, 0xc0019cc180)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2192
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2369 [select, 5 minutes]:
os/exec.(*Cmd).watchCtx(0xc000d42000, 0xc000972180)
	/usr/local/go/src/os/exec/exec.go:764 +0xb5
created by os/exec.(*Cmd).Start in goroutine 2366
	/usr/local/go/src/os/exec/exec.go:750 +0x9f3

                                                
                                                
goroutine 2206 [chan receive, 1 minutes]:
testing.(*T).Run(0xc0015f6b60, {0x2cb635b?, 0x3d112d8?}, 0xc0005763f0)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0015f6b60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:111 +0x5de
testing.tRunner(0xc0015f6b60, 0xc0019cc480)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2192
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2201 [chan receive, 5 minutes]:
testing.(*T).Run(0xc0001616c0, {0x2cb635b?, 0x3d112d8?}, 0xc000760660)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0001616c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:111 +0x5de
testing.tRunner(0xc0001616c0, 0xc0019cc100)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2192
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2184 [syscall, 5 minutes, locked to thread]:
syscall.SyscallN(0x7ff843424de0?, {0xc001a6b798?, 0x3?, 0x0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0x3?, 0x3?, 0x1?, 0x2?, 0x0?)
	/usr/local/go/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0x62c, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1142 +0x5d
os.(*Process).wait(0xc0018a25a0)
	/usr/local/go/src/os/exec_windows.go:18 +0x50
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc00160e420)
	/usr/local/go/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc00160e420)
	/usr/local/go/src/os/exec/exec.go:607 +0x2d
k8s.io/minikube/test/integration.Run(0xc0015f6340, 0xc00160e420)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestKubernetesUpgrade(0xc0015f6340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:275 +0x1445
testing.tRunner(0xc0015f6340, 0x37c22a0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2444 [select, 1 minutes]:
os/exec.(*Cmd).watchCtx(0xc000d42160, 0xc000054360)
	/usr/local/go/src/os/exec/exec.go:764 +0xb5
created by os/exec.(*Cmd).Start in goroutine 2405
	/usr/local/go/src/os/exec/exec.go:750 +0x9f3

                                                
                                                
goroutine 2333 [chan receive, 17 minutes]:
testing.(*testContext).waitParallel(0xc0006fb590)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc001599a00)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc001599a00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc001599a00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc001599a00, 0xc00023e780)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2331
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2334 [chan receive, 17 minutes]:
testing.(*testContext).waitParallel(0xc0006fb590)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc001599d40)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc001599d40)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc001599d40)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc001599d40, 0xc00023e7c0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2331
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2335 [chan receive, 17 minutes]:
testing.(*testContext).waitParallel(0xc0006fb590)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc000654b60)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000654b60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc000654b60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc000654b60, 0xc00023e980)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2331
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2336 [chan receive, 17 minutes]:
testing.(*testContext).waitParallel(0xc0006fb590)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc000654ea0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000654ea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc000654ea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc000654ea0, 0xc00023ea00)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2331
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2337 [chan receive, 17 minutes]:
testing.(*testContext).waitParallel(0xc0006fb590)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc000655380)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000655380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc000655380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc000655380, 0xc00023ec80)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2331
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2370 [chan receive, 1 minutes]:
testing.(*T).Run(0xc0006556c0, {0x2cc53c5?, 0x24?}, 0xc0019da200)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestPause.func1(0xc0006556c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/pause_test.go:65 +0x1ee
testing.tRunner(0xc0006556c0, 0xc0016b6270)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2097
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2449 [syscall, locked to thread]:
syscall.SyscallN(0x7ff843424de0?, {0xc001a677f8?, 0x3?, 0x0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0x3?, 0x3?, 0x1?, 0x2?, 0x0?)
	/usr/local/go/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0x648, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1142 +0x5d
os.(*Process).wait(0xc001732e70)
	/usr/local/go/src/os/exec_windows.go:18 +0x50
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc00160e000)
	/usr/local/go/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc00160e000)
	/usr/local/go/src/os/exec/exec.go:607 +0x2d
k8s.io/minikube/test/integration.Run(0xc0008f7d40, 0xc00160e000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.PostMortemLogs(0xc0008f7d40, {0xc00191a280, 0xc}, {0x0, 0x0, 0xc0014dbf60?})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:247 +0x7ce
k8s.io/minikube/test/integration.TestPause.func1.1(0xc0008f7d40)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/pause_test.go:68 +0x7a
testing.tRunner(0xc0008f7d40, 0xc0019da200)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2370
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2442 [syscall, 1 minutes, locked to thread]:
syscall.SyscallN(0x0?, {0xc0016d7b20?, 0xab7ea5?, 0x0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0xaa2cf9?, 0xc0016d7b80?, 0xaafdd6?, 0x518a6a0?, 0xc0016d7c08?, 0xaa281b?, 0xa98ba6?, 0x0?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x690, {0xc0008fcdef?, 0x211, 0xb5417f?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc00091e788?, {0xc0008fcdef?, 0xadc171?, 0x400?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc00091e788, {0xc0008fcdef, 0x211, 0x211})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc00068ae00, {0xc0008fcdef?, 0xc001aaa1c0?, 0x6a?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc000576a20, {0x3d17320, 0xc000a0e0c0})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3d17460, 0xc000576a20}, {0x3d17320, 0xc000a0e0c0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc0016d7e78?, {0x3d17460, 0xc000576a20})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x508db70?, {0x3d17460?, 0xc000576a20?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x3d17460, 0xc000576a20}, {0x3d173e0, 0xc00068ae00}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc001c4e5a0?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2405
	/usr/local/go/src/os/exec/exec.go:723 +0xa2b

                                                
                                    

Test pass (157/200)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 22.94
4 TestDownloadOnly/v1.20.0/preload-exists 0.05
7 TestDownloadOnly/v1.20.0/kubectl 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.41
9 TestDownloadOnly/v1.20.0/DeleteAll 1.27
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 1.23
12 TestDownloadOnly/v1.30.1/json-events 10.69
13 TestDownloadOnly/v1.30.1/preload-exists 0.03
16 TestDownloadOnly/v1.30.1/kubectl 0
17 TestDownloadOnly/v1.30.1/LogsDuration 0.36
18 TestDownloadOnly/v1.30.1/DeleteAll 1.27
19 TestDownloadOnly/v1.30.1/DeleteAlwaysSucceeds 1.25
21 TestBinaryMirror 6.88
22 TestOffline 292.74
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.19
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.2
27 TestAddons/Setup 441.42
30 TestAddons/parallel/Ingress 66.5
31 TestAddons/parallel/InspektorGadget 34.17
32 TestAddons/parallel/MetricsServer 21.11
33 TestAddons/parallel/HelmTiller 30.13
35 TestAddons/parallel/CSI 83.81
36 TestAddons/parallel/Headlamp 37.14
37 TestAddons/parallel/CloudSpanner 21.72
38 TestAddons/parallel/LocalPath 88.04
39 TestAddons/parallel/NvidiaDevicePlugin 21.93
40 TestAddons/parallel/Yakd 5.02
41 TestAddons/parallel/Volcano 65.62
44 TestAddons/serial/GCPAuth/Namespaces 0.34
45 TestAddons/StoppedEnableDisable 54.66
46 TestCertOptions 401.88
47 TestCertExpiration 1028.18
48 TestDockerFlags 577.07
49 TestForceSystemdFlag 395.37
50 TestForceSystemdEnv 511.61
57 TestErrorSpam/start 17.52
58 TestErrorSpam/status 37.62
59 TestErrorSpam/pause 23.14
60 TestErrorSpam/unpause 23.45
61 TestErrorSpam/stop 60.26
64 TestFunctional/serial/CopySyncFile 0.03
65 TestFunctional/serial/StartWithProxy 238.83
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 130.05
68 TestFunctional/serial/KubeContext 0.13
69 TestFunctional/serial/KubectlGetPods 0.23
72 TestFunctional/serial/CacheCmd/cache/add_remote 26.32
73 TestFunctional/serial/CacheCmd/cache/add_local 11.12
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.19
75 TestFunctional/serial/CacheCmd/cache/list 0.18
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 9.49
77 TestFunctional/serial/CacheCmd/cache/cache_reload 36.85
78 TestFunctional/serial/CacheCmd/cache/delete 0.37
79 TestFunctional/serial/MinikubeKubectlCmd 0.44
81 TestFunctional/serial/ExtraConfig 128.22
82 TestFunctional/serial/ComponentHealth 0.18
83 TestFunctional/serial/LogsCmd 8.5
84 TestFunctional/serial/LogsFileCmd 10.87
85 TestFunctional/serial/InvalidService 21.44
91 TestFunctional/parallel/StatusCmd 43.49
95 TestFunctional/parallel/ServiceCmdConnect 36.71
96 TestFunctional/parallel/AddonsCmd 0.85
97 TestFunctional/parallel/PersistentVolumeClaim 47.03
99 TestFunctional/parallel/SSHCmd 20.12
100 TestFunctional/parallel/CpCmd 54.8
101 TestFunctional/parallel/MySQL 64.61
102 TestFunctional/parallel/FileSync 12.19
103 TestFunctional/parallel/CertSync 66.2
107 TestFunctional/parallel/NodeLabels 0.26
109 TestFunctional/parallel/NonActiveRuntimeDisabled 12.65
111 TestFunctional/parallel/License 4.03
112 TestFunctional/parallel/DockerEnv/powershell 49.21
113 TestFunctional/parallel/ImageCommands/ImageListShort 7.79
114 TestFunctional/parallel/ImageCommands/ImageListTable 7.8
115 TestFunctional/parallel/ImageCommands/ImageListJson 7.73
116 TestFunctional/parallel/ImageCommands/ImageListYaml 7.78
117 TestFunctional/parallel/ImageCommands/ImageBuild 27
118 TestFunctional/parallel/ImageCommands/Setup 4.65
119 TestFunctional/parallel/Version/short 0.17
120 TestFunctional/parallel/Version/components 8.39
121 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 26
122 TestFunctional/parallel/UpdateContextCmd/no_changes 2.53
123 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 2.53
124 TestFunctional/parallel/UpdateContextCmd/no_clusters 2.57
125 TestFunctional/parallel/ProfileCmd/profile_not_create 11.63
126 TestFunctional/parallel/ProfileCmd/profile_list 11.86
127 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 21.98
128 TestFunctional/parallel/ProfileCmd/profile_json_output 11.68
130 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 9.83
131 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 27.63
132 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
134 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 29.82
135 TestFunctional/parallel/ImageCommands/ImageSaveToFile 10.51
141 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
142 TestFunctional/parallel/ServiceCmd/DeployApp 8.69
143 TestFunctional/parallel/ImageCommands/ImageRemove 16.71
144 TestFunctional/parallel/ServiceCmd/List 14.73
145 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 19.98
146 TestFunctional/parallel/ServiceCmd/JSONOutput 14.47
148 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 11.01
151 TestFunctional/delete_addon-resizer_images 0.48
152 TestFunctional/delete_my-image_image 0.19
153 TestFunctional/delete_minikube_cached_images 0.18
157 TestMultiControlPlane/serial/StartCluster 735.48
158 TestMultiControlPlane/serial/DeployApp 10.66
160 TestMultiControlPlane/serial/AddWorkerNode 260.42
161 TestMultiControlPlane/serial/NodeLabels 0.19
162 TestMultiControlPlane/serial/HAppyAfterClusterStart 29.78
163 TestMultiControlPlane/serial/CopyFile 648.54
167 TestImageBuild/serial/Setup 198.21
168 TestImageBuild/serial/NormalBuild 9.67
169 TestImageBuild/serial/BuildWithBuildArg 9.05
170 TestImageBuild/serial/BuildWithDockerIgnore 7.72
171 TestImageBuild/serial/BuildWithSpecifiedDockerfile 7.7
175 TestJSONOutput/start/Command 208.91
176 TestJSONOutput/start/Audit 0
178 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
179 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
181 TestJSONOutput/pause/Command 7.87
182 TestJSONOutput/pause/Audit 0
184 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
185 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/unpause/Command 7.74
188 TestJSONOutput/unpause/Audit 0
190 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/stop/Command 39.42
194 TestJSONOutput/stop/Audit 0
196 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
198 TestErrorJSONOutput 1.27
203 TestMainNoArgs 0.17
204 TestMinikubeProfile 526.91
207 TestMountStart/serial/StartWithMountFirst 155.97
208 TestMountStart/serial/VerifyMountFirst 9.5
209 TestMountStart/serial/StartWithMountSecond 156.46
210 TestMountStart/serial/VerifyMountSecond 9.63
211 TestMountStart/serial/DeleteFirst 30.8
212 TestMountStart/serial/VerifyMountPostDelete 9.31
213 TestMountStart/serial/Stop 31.09
214 TestMountStart/serial/RestartStopped 118.84
215 TestMountStart/serial/VerifyMountPostStop 9.52
218 TestMultiNode/serial/FreshStart2Nodes 425.1
219 TestMultiNode/serial/DeployApp2Nodes 8.5
221 TestMultiNode/serial/AddNode 228.42
222 TestMultiNode/serial/MultiNodeLabels 0.18
223 TestMultiNode/serial/ProfileList 9.98
224 TestMultiNode/serial/CopyFile 369.66
225 TestMultiNode/serial/StopNode 76.87
226 TestMultiNode/serial/StartAfterStop 183.56
231 TestPreload 547
232 TestScheduledStopWindows 331.09
237 TestRunningBinaryUpgrade 1025.96
242 TestNoKubernetes/serial/StartNoK8sWithVersion 0.3
255 TestStoppedBinaryUpgrade/Setup 0.69
256 TestStoppedBinaryUpgrade/Upgrade 856.04
268 TestStoppedBinaryUpgrade/MinikubeLogs 10.14
TestDownloadOnly/v1.20.0/json-events (22.94s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-696000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperv
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-696000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperv: (22.9386795s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (22.94s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0.05s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.05s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
--- PASS: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.41s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-696000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-696000: exit status 85 (406.6254ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |       User        | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-696000 | minikube1\jenkins | v1.33.1 | 28 May 24 13:21 PDT |          |
	|         | -p download-only-696000        |                      |                   |         |                     |          |
	|         | --force --alsologtostderr      |                      |                   |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |                   |         |                     |          |
	|         | --container-runtime=docker     |                      |                   |         |                     |          |
	|         | --driver=hyperv                |                      |                   |         |                     |          |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/28 13:21:55
	Running on machine: minikube1
	Binary: Built with gc go1.22.3 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0528 13:21:55.965917   10500 out.go:291] Setting OutFile to fd 632 ...
	I0528 13:21:55.967742   10500 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 13:21:55.967742   10500 out.go:304] Setting ErrFile to fd 636...
	I0528 13:21:55.967742   10500 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0528 13:21:55.982340   10500 root.go:314] Error reading config file at C:\Users\jenkins.minikube1\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins.minikube1\minikube-integration\.minikube\config\config.json: The system cannot find the path specified.
	I0528 13:21:55.994807   10500 out.go:298] Setting JSON to true
	I0528 13:21:56.000022   10500 start.go:129] hostinfo: {"hostname":"minikube1","uptime":480,"bootTime":1716927235,"procs":198,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4412 Build 19045.4412","kernelVersion":"10.0.19045.4412 Build 19045.4412","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0528 13:21:56.000022   10500 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0528 13:21:56.008375   10500 out.go:97] [download-only-696000] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4412 Build 19045.4412
	W0528 13:21:56.008620   10500 preload.go:294] Failed to list preload files: open C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball: The system cannot find the file specified.
	I0528 13:21:56.008620   10500 notify.go:220] Checking for updates...
	I0528 13:21:56.011474   10500 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0528 13:21:56.014615   10500 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0528 13:21:56.018131   10500 out.go:169] MINIKUBE_LOCATION=18966
	I0528 13:21:56.018832   10500 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0528 13:21:56.023839   10500 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0528 13:21:56.026573   10500 driver.go:392] Setting default libvirt URI to qemu:///system
	I0528 13:22:01.155061   10500 out.go:97] Using the hyperv driver based on user configuration
	I0528 13:22:01.155275   10500 start.go:297] selected driver: hyperv
	I0528 13:22:01.155275   10500 start.go:901] validating driver "hyperv" against <nil>
	I0528 13:22:01.155745   10500 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0528 13:22:01.203091   10500 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=65534MB, container=0MB
	I0528 13:22:01.203772   10500 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0528 13:22:01.203772   10500 cni.go:84] Creating CNI manager for ""
	I0528 13:22:01.203772   10500 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0528 13:22:01.203772   10500 start.go:340] cluster config:
	{Name:download-only-696000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-696000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0528 13:22:01.205041   10500 iso.go:125] acquiring lock: {Name:mk309f19768252c5551da1c999633aa2617813e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0528 13:22:01.206539   10500 out.go:97] Downloading VM boot image ...
	I0528 13:22:01.206539   10500 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso.sha256 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\iso\amd64\minikube-v1.33.1-1716398070-18934-amd64.iso
	I0528 13:22:11.868279   10500 out.go:97] Starting "download-only-696000" primary control-plane node in "download-only-696000" cluster
	I0528 13:22:11.868279   10500 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0528 13:22:11.910900   10500 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0528 13:22:11.915244   10500 cache.go:56] Caching tarball of preloaded images
	I0528 13:22:11.916164   10500 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0528 13:22:11.918991   10500 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0528 13:22:11.919073   10500 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0528 13:22:11.981779   10500 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0528 13:22:15.320371   10500 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0528 13:22:15.325371   10500 preload.go:255] verifying checksum of C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	
	
	* The control-plane node download-only-696000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-696000"

                                                
                                                
-- /stdout --
** stderr ** 
	W0528 13:22:18.947429    7780 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.41s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (1.27s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:197: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (1.2612944s)
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (1.27s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (1.23s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-696000
aaa_download_only_test.go:208: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-696000: (1.2232108s)
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (1.23s)

TestDownloadOnly/v1.30.1/json-events (10.69s)

=== RUN   TestDownloadOnly/v1.30.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-761200 --force --alsologtostderr --kubernetes-version=v1.30.1 --container-runtime=docker --driver=hyperv
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-761200 --force --alsologtostderr --kubernetes-version=v1.30.1 --container-runtime=docker --driver=hyperv: (10.6842834s)
--- PASS: TestDownloadOnly/v1.30.1/json-events (10.69s)

TestDownloadOnly/v1.30.1/preload-exists (0.03s)

=== RUN   TestDownloadOnly/v1.30.1/preload-exists
--- PASS: TestDownloadOnly/v1.30.1/preload-exists (0.03s)

TestDownloadOnly/v1.30.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.30.1/kubectl
--- PASS: TestDownloadOnly/v1.30.1/kubectl (0.00s)

TestDownloadOnly/v1.30.1/LogsDuration (0.36s)

=== RUN   TestDownloadOnly/v1.30.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-761200
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-761200: exit status 85 (356.1801ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-696000 | minikube1\jenkins | v1.33.1 | 28 May 24 13:21 PDT |                     |
	|         | -p download-only-696000        |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr      |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |                   |         |                     |                     |
	|         | --container-runtime=docker     |                      |                   |         |                     |                     |
	|         | --driver=hyperv                |                      |                   |         |                     |                     |
	| delete  | --all                          | minikube             | minikube1\jenkins | v1.33.1 | 28 May 24 13:22 PDT | 28 May 24 13:22 PDT |
	| delete  | -p download-only-696000        | download-only-696000 | minikube1\jenkins | v1.33.1 | 28 May 24 13:22 PDT | 28 May 24 13:22 PDT |
	| start   | -o=json --download-only        | download-only-761200 | minikube1\jenkins | v1.33.1 | 28 May 24 13:22 PDT |                     |
	|         | -p download-only-761200        |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr      |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.30.1   |                      |                   |         |                     |                     |
	|         | --container-runtime=docker     |                      |                   |         |                     |                     |
	|         | --driver=hyperv                |                      |                   |         |                     |                     |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/28 13:22:21
	Running on machine: minikube1
	Binary: Built with gc go1.22.3 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0528 13:22:21.873836    3656 out.go:291] Setting OutFile to fd 664 ...
	I0528 13:22:21.874507    3656 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 13:22:21.874507    3656 out.go:304] Setting ErrFile to fd 580...
	I0528 13:22:21.874507    3656 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 13:22:21.897853    3656 out.go:298] Setting JSON to true
	I0528 13:22:21.904720    3656 start.go:129] hostinfo: {"hostname":"minikube1","uptime":506,"bootTime":1716927235,"procs":197,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4412 Build 19045.4412","kernelVersion":"10.0.19045.4412 Build 19045.4412","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0528 13:22:21.904829    3656 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0528 13:22:21.910507    3656 out.go:97] [download-only-761200] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4412 Build 19045.4412
	I0528 13:22:21.910507    3656 notify.go:220] Checking for updates...
	I0528 13:22:21.912818    3656 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0528 13:22:21.915802    3656 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0528 13:22:21.918178    3656 out.go:169] MINIKUBE_LOCATION=18966
	I0528 13:22:21.922282    3656 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0528 13:22:21.927250    3656 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0528 13:22:21.928254    3656 driver.go:392] Setting default libvirt URI to qemu:///system
	I0528 13:22:27.136696    3656 out.go:97] Using the hyperv driver based on user configuration
	I0528 13:22:27.139293    3656 start.go:297] selected driver: hyperv
	I0528 13:22:27.139293    3656 start.go:901] validating driver "hyperv" against <nil>
	I0528 13:22:27.139848    3656 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0528 13:22:27.191443    3656 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=65534MB, container=0MB
	I0528 13:22:27.192841    3656 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0528 13:22:27.192926    3656 cni.go:84] Creating CNI manager for ""
	I0528 13:22:27.192926    3656 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0528 13:22:27.193025    3656 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0528 13:22:27.193221    3656 start.go:340] cluster config:
	{Name:download-only-761200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:download-only-761200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0528 13:22:27.193530    3656 iso.go:125] acquiring lock: {Name:mk309f19768252c5551da1c999633aa2617813e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0528 13:22:27.197784    3656 out.go:97] Starting "download-only-761200" primary control-plane node in "download-only-761200" cluster
	I0528 13:22:27.197864    3656 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0528 13:22:27.241212    3656 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.1/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
	I0528 13:22:27.242989    3656 cache.go:56] Caching tarball of preloaded images
	I0528 13:22:27.243478    3656 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0528 13:22:27.246577    3656 out.go:97] Downloading Kubernetes v1.30.1 preload ...
	I0528 13:22:27.246673    3656 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 ...
	I0528 13:22:27.326318    3656 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.1/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4?checksum=md5:f110de85c4cd01fa5de0726fbc529387 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
	I0528 13:22:30.520802    3656 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 ...
	I0528 13:22:30.524889    3656 preload.go:255] verifying checksum of C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 ...
	I0528 13:22:31.403468    3656 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0528 13:22:31.409605    3656 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\download-only-761200\config.json ...
	I0528 13:22:31.410247    3656 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\download-only-761200\config.json: {Name:mk9120e8b563f7cd2f5b900383ec150387146299 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 13:22:31.410449    3656 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0528 13:22:31.411806    3656 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.1/bin/windows/amd64/kubectl.exe?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/windows/amd64/kubectl.exe.sha256 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\windows\amd64\v1.30.1/kubectl.exe
	
	
	* The control-plane node download-only-761200 host does not exist
	  To start a cluster, run: "minikube start -p download-only-761200"

-- /stdout --
** stderr ** 
	W0528 13:22:32.589518    9876 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.1/LogsDuration (0.36s)
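
A note on the preload steps logged above: download.go fetches the tarball with a "?checksum=md5:..." query, and preload.go then saves and verifies that checksum, which amounts to recomputing an MD5 digest over the downloaded file and comparing it to the expected value. Below is a minimal Go sketch of that verification step only; it is illustrative, not minikube's actual preload.go code, and the path and digest are simply copied from the log lines above.

-- example sketch (not from the test run) --
package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

// verifyMD5 recomputes the MD5 digest of the file at path and compares it
// with the expected hex-encoded checksum.
func verifyMD5(path, wantHex string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()

	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		return err
	}
	got := hex.EncodeToString(h.Sum(nil))
	if got != wantHex {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantHex)
	}
	return nil
}

func main() {
	// Path and expected digest copied from the log above; adjust for other runs.
	tarball := `C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4`
	if err := verifyMD5(tarball, "f110de85c4cd01fa5de0726fbc529387"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("preload checksum OK")
}
-- /example sketch --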

TestDownloadOnly/v1.30.1/DeleteAll (1.27s)

=== RUN   TestDownloadOnly/v1.30.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:197: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (1.2615949s)
--- PASS: TestDownloadOnly/v1.30.1/DeleteAll (1.27s)

TestDownloadOnly/v1.30.1/DeleteAlwaysSucceeds (1.25s)

=== RUN   TestDownloadOnly/v1.30.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-761200
aaa_download_only_test.go:208: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-761200: (1.2418441s)
--- PASS: TestDownloadOnly/v1.30.1/DeleteAlwaysSucceeds (1.25s)

TestBinaryMirror (6.88s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p binary-mirror-084900 --alsologtostderr --binary-mirror http://127.0.0.1:52656 --driver=hyperv
aaa_download_only_test.go:314: (dbg) Done: out/minikube-windows-amd64.exe start --download-only -p binary-mirror-084900 --alsologtostderr --binary-mirror http://127.0.0.1:52656 --driver=hyperv: (6.0559098s)
helpers_test.go:175: Cleaning up "binary-mirror-084900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p binary-mirror-084900
--- PASS: TestBinaryMirror (6.88s)

TestOffline (292.74s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe start -p offline-docker-390500 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperv
aab_offline_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe start -p offline-docker-390500 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperv: (4m7.2009956s)
helpers_test.go:175: Cleaning up "offline-docker-390500" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p offline-docker-390500
E0528 15:53:09.718220    1780 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-854100\client.crt: The system cannot find the path specified.
E0528 15:53:29.676025    1780 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-200200\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p offline-docker-390500: (45.538038s)
--- PASS: TestOffline (292.74s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.19s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1029: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-854100
addons_test.go:1029: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons enable dashboard -p addons-854100: exit status 85 (182.0169ms)

-- stdout --
	* Profile "addons-854100" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-854100"

-- /stdout --
** stderr ** 
	W0528 13:22:44.913597    4772 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.19s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.2s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1040: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-854100
addons_test.go:1040: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons disable dashboard -p addons-854100: exit status 85 (195.0561ms)

-- stdout --
	* Profile "addons-854100" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-854100"

-- /stdout --
** stderr ** 
	W0528 13:22:44.909275   10388 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.20s)

TestAddons/Setup (441.42s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe start -p addons-854100 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=hyperv --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe start -p addons-854100 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=hyperv --addons=ingress --addons=ingress-dns --addons=helm-tiller: (7m21.4215468s)
--- PASS: TestAddons/Setup (441.42s)

TestAddons/parallel/Ingress (66.5s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-854100 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-854100 replace --force -f testdata\nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-854100 replace --force -f testdata\nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [47224d6b-8166-43c0-a7f6-9d80e7ff508c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [47224d6b-8166-43c0-a7f6-9d80e7ff508c] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 14.0144943s
addons_test.go:264: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-854100 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Done: out/minikube-windows-amd64.exe -p addons-854100 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": (9.5750276s)
addons_test.go:271: debug: unexpected stderr for out/minikube-windows-amd64.exe -p addons-854100 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'":
W0528 13:31:42.567282    1320 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
addons_test.go:288: (dbg) Run:  kubectl --context addons-854100 replace --force -f testdata\ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-854100 ip
addons_test.go:293: (dbg) Done: out/minikube-windows-amd64.exe -p addons-854100 ip: (2.4711002s)
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 172.18.209.103
addons_test.go:308: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-854100 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-windows-amd64.exe -p addons-854100 addons disable ingress-dns --alsologtostderr -v=1: (15.8730221s)
addons_test.go:313: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-854100 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-windows-amd64.exe -p addons-854100 addons disable ingress --alsologtostderr -v=1: (22.4120693s)
--- PASS: TestAddons/parallel/Ingress (66.50s)
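
A note on the ssh step above: the test hits the ingress controller at the node-local address and relies on the Host header ("curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'") to route to the nginx backend. The Go sketch below makes the same request shape; it is illustrative only, and it would have to run somewhere 127.0.0.1 actually reaches the controller (inside the node), which the test arranges via ssh.

-- example sketch (not from the test run) --
package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
)

func main() {
	// Plain GET against the node-local address used by the test.
	req, err := http.NewRequest(http.MethodGet, "http://127.0.0.1/", nil)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Setting req.Host controls the Host header Go sends, which is what the
	// ingress rule for nginx.example.com matches on.
	req.Host = "nginx.example.com"

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status)
	fmt.Println(string(body))
}
-- /example sketch --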

TestAddons/parallel/InspektorGadget (34.17s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:840: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-5sw9n" [6d2fd006-f087-478b-b49c-a0b12ebbcf40] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:840: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.0196804s
addons_test.go:843: (dbg) Run:  out/minikube-windows-amd64.exe addons disable inspektor-gadget -p addons-854100
addons_test.go:843: (dbg) Done: out/minikube-windows-amd64.exe addons disable inspektor-gadget -p addons-854100: (28.1431401s)
--- PASS: TestAddons/parallel/InspektorGadget (34.17s)

TestAddons/parallel/MetricsServer (21.11s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 8.596ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-rd2bf" [77e93a74-f9b2-4e7c-ad4b-538a18054ac8] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.0199086s
addons_test.go:417: (dbg) Run:  kubectl --context addons-854100 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-854100 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:434: (dbg) Done: out/minikube-windows-amd64.exe -p addons-854100 addons disable metrics-server --alsologtostderr -v=1: (15.7937658s)
--- PASS: TestAddons/parallel/MetricsServer (21.11s)

TestAddons/parallel/HelmTiller (30.13s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 21.3371ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6677d64bcd-k5mnt" [e6b581aa-a842-42e5-8e81-3b6098b04f02] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.0222622s
addons_test.go:475: (dbg) Run:  kubectl --context addons-854100 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-854100 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (7.2418425s)
addons_test.go:492: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-854100 addons disable helm-tiller --alsologtostderr -v=1
addons_test.go:492: (dbg) Done: out/minikube-windows-amd64.exe -p addons-854100 addons disable helm-tiller --alsologtostderr -v=1: (17.8213037s)
--- PASS: TestAddons/parallel/HelmTiller (30.13s)

TestAddons/parallel/CSI (83.81s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:563: csi-hostpath-driver pods stabilized in 10.7741ms
addons_test.go:566: (dbg) Run:  kubectl --context addons-854100 create -f testdata\csi-hostpath-driver\pvc.yaml
addons_test.go:571: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-854100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-854100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-854100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-854100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-854100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-854100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-854100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-854100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-854100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-854100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-854100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-854100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-854100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-854100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-854100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-854100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-854100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-854100 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:576: (dbg) Run:  kubectl --context addons-854100 create -f testdata\csi-hostpath-driver\pv-pod.yaml
addons_test.go:581: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [9f56d33f-a3ee-42e5-9c81-e7a044f62a48] Pending
helpers_test.go:344: "task-pv-pod" [9f56d33f-a3ee-42e5-9c81-e7a044f62a48] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [9f56d33f-a3ee-42e5-9c81-e7a044f62a48] Running
addons_test.go:581: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 9.0096717s
addons_test.go:586: (dbg) Run:  kubectl --context addons-854100 create -f testdata\csi-hostpath-driver\snapshot.yaml
addons_test.go:591: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-854100 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-854100 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:596: (dbg) Run:  kubectl --context addons-854100 delete pod task-pv-pod
addons_test.go:596: (dbg) Done: kubectl --context addons-854100 delete pod task-pv-pod: (1.1214852s)
addons_test.go:602: (dbg) Run:  kubectl --context addons-854100 delete pvc hpvc
addons_test.go:608: (dbg) Run:  kubectl --context addons-854100 create -f testdata\csi-hostpath-driver\pvc-restore.yaml
addons_test.go:613: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-854100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-854100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-854100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-854100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-854100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-854100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-854100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-854100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:618: (dbg) Run:  kubectl --context addons-854100 create -f testdata\csi-hostpath-driver\pv-pod-restore.yaml
addons_test.go:623: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [32e9ef3a-96c6-4091-83d8-e86de2047c13] Pending
helpers_test.go:344: "task-pv-pod-restore" [32e9ef3a-96c6-4091-83d8-e86de2047c13] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [32e9ef3a-96c6-4091-83d8-e86de2047c13] Running
addons_test.go:623: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.0150297s
addons_test.go:628: (dbg) Run:  kubectl --context addons-854100 delete pod task-pv-pod-restore
addons_test.go:628: (dbg) Done: kubectl --context addons-854100 delete pod task-pv-pod-restore: (1.4295548s)
addons_test.go:632: (dbg) Run:  kubectl --context addons-854100 delete pvc hpvc-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-854100 delete volumesnapshot new-snapshot-demo
addons_test.go:640: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-854100 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:640: (dbg) Done: out/minikube-windows-amd64.exe -p addons-854100 addons disable csi-hostpath-driver --alsologtostderr -v=1: (21.2639744s)
addons_test.go:644: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-854100 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-windows-amd64.exe -p addons-854100 addons disable volumesnapshots --alsologtostderr -v=1: (15.1369874s)
--- PASS: TestAddons/parallel/CSI (83.81s)
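
A note on the repeated "kubectl get pvc ... -o jsonpath={.status.phase}" lines above: the helper is simply polling each claim until its phase reports Bound. A rough client-go equivalent is sketched below; it is not the helpers_test.go implementation, and the kubeconfig path, namespace, claim name, and timeout are placeholders taken from this run.

-- example sketch (not from the test run) --
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPVCBound polls the claim's .status.phase until it is Bound or the
// timeout elapses, mirroring what the repeated kubectl calls above amount to.
func waitForPVCBound(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pvc, err := cs.CoreV1().PersistentVolumeClaims(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil && pvc.Status.Phase == corev1.ClaimBound {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pvc %s/%s not Bound within %s", ns, name, timeout)
}

func main() {
	// Placeholder kubeconfig path; the CI job points at its own minikube-integration kubeconfig.
	config, err := clientcmd.BuildConfigFromFlags("", `C:\Users\jenkins.minikube1\minikube-integration\kubeconfig`)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	if err := waitForPVCBound(cs, "default", "hpvc", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("pvc default/hpvc is Bound")
}
-- /example sketch --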

TestAddons/parallel/Headlamp (37.14s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:826: (dbg) Run:  out/minikube-windows-amd64.exe addons enable headlamp -p addons-854100 --alsologtostderr -v=1
addons_test.go:826: (dbg) Done: out/minikube-windows-amd64.exe addons enable headlamp -p addons-854100 --alsologtostderr -v=1: (17.1139158s)
addons_test.go:831: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-68456f997b-rdfqn" [73c4cafa-4e29-4cd1-af75-255aa286ddf4] Pending
helpers_test.go:344: "headlamp-68456f997b-rdfqn" [73c4cafa-4e29-4cd1-af75-255aa286ddf4] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-68456f997b-rdfqn" [73c4cafa-4e29-4cd1-af75-255aa286ddf4] Running
addons_test.go:831: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 20.0199452s
--- PASS: TestAddons/parallel/Headlamp (37.14s)

TestAddons/parallel/CloudSpanner (21.72s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:859: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6fcd4f6f98-8dtmk" [887a17ca-3772-4766-8eb1-efd00c93e1ce] Running
addons_test.go:859: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.1544997s
addons_test.go:862: (dbg) Run:  out/minikube-windows-amd64.exe addons disable cloud-spanner -p addons-854100
addons_test.go:862: (dbg) Done: out/minikube-windows-amd64.exe addons disable cloud-spanner -p addons-854100: (16.5530833s)
--- PASS: TestAddons/parallel/CloudSpanner (21.72s)

TestAddons/parallel/LocalPath (88.04s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:974: (dbg) Run:  kubectl --context addons-854100 apply -f testdata\storage-provisioner-rancher\pvc.yaml
addons_test.go:980: (dbg) Run:  kubectl --context addons-854100 apply -f testdata\storage-provisioner-rancher\pod.yaml
addons_test.go:984: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-854100 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-854100 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-854100 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-854100 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-854100 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-854100 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-854100 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:987: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [2232add3-221f-4443-980b-11d6eb7b355c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [2232add3-221f-4443-980b-11d6eb7b355c] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [2232add3-221f-4443-980b-11d6eb7b355c] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:987: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 7.013365s
addons_test.go:992: (dbg) Run:  kubectl --context addons-854100 get pvc test-pvc -o=json
addons_test.go:1001: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-854100 ssh "cat /opt/local-path-provisioner/pvc-72f9c62a-a060-44b9-a171-fede391dec31_default_test-pvc/file1"
addons_test.go:1001: (dbg) Done: out/minikube-windows-amd64.exe -p addons-854100 ssh "cat /opt/local-path-provisioner/pvc-72f9c62a-a060-44b9-a171-fede391dec31_default_test-pvc/file1": (10.5312047s)
addons_test.go:1013: (dbg) Run:  kubectl --context addons-854100 delete pod test-local-path
addons_test.go:1017: (dbg) Run:  kubectl --context addons-854100 delete pvc test-pvc
addons_test.go:1021: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-854100 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1021: (dbg) Done: out/minikube-windows-amd64.exe -p addons-854100 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (1m2.6094517s)
--- PASS: TestAddons/parallel/LocalPath (88.04s)

TestAddons/parallel/NvidiaDevicePlugin (21.93s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1053: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-52zc7" [fc58ec9d-8ebf-4c49-b94f-0149677e96c2] Running
addons_test.go:1053: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.0111097s
addons_test.go:1056: (dbg) Run:  out/minikube-windows-amd64.exe addons disable nvidia-device-plugin -p addons-854100
addons_test.go:1056: (dbg) Done: out/minikube-windows-amd64.exe addons disable nvidia-device-plugin -p addons-854100: (16.0937651s)
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (21.93s)

TestAddons/parallel/Yakd (5.02s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1064: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-5ddbf7d777-9866b" [8ea987db-041a-403b-96a1-bc9f91a99b67] Running
addons_test.go:1064: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.0122619s
--- PASS: TestAddons/parallel/Yakd (5.02s)

TestAddons/parallel/Volcano (65.62s)

=== RUN   TestAddons/parallel/Volcano
=== PAUSE TestAddons/parallel/Volcano

=== CONT  TestAddons/parallel/Volcano
addons_test.go:889: volcano-scheduler stabilized in 6.3474ms
addons_test.go:905: volcano-controller stabilized in 6.6941ms
addons_test.go:897: volcano-admission stabilized in 10.2171ms
addons_test.go:911: (dbg) TestAddons/parallel/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-765f888978-jqhvw" [0cc3fd73-87ce-485e-871d-219cd85e57bd] Running
addons_test.go:911: (dbg) TestAddons/parallel/Volcano: app=volcano-scheduler healthy within 5.0134445s
addons_test.go:915: (dbg) TestAddons/parallel/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-7b497cf95b-wsm6p" [4ce8e008-99b1-4450-93a0-d161c4472ad4] Running
addons_test.go:915: (dbg) TestAddons/parallel/Volcano: app=volcano-admission healthy within 5.021891s
addons_test.go:919: (dbg) TestAddons/parallel/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controller-86c5446455-6642n" [c1ef23a6-d2de-4aec-9fda-440871e76175] Running
addons_test.go:919: (dbg) TestAddons/parallel/Volcano: app=volcano-controller healthy within 5.0123693s
addons_test.go:924: (dbg) Run:  kubectl --context addons-854100 delete -n volcano-system job volcano-admission-init
addons_test.go:930: (dbg) Run:  kubectl --context addons-854100 create -f testdata\vcjob.yaml
addons_test.go:938: (dbg) Run:  kubectl --context addons-854100 get vcjob -n my-volcano
addons_test.go:956: (dbg) TestAddons/parallel/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [7af9523d-0192-4135-8854-2d15a10b594a] Pending
helpers_test.go:344: "test-job-nginx-0" [7af9523d-0192-4135-8854-2d15a10b594a] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [7af9523d-0192-4135-8854-2d15a10b594a] Running
addons_test.go:956: (dbg) TestAddons/parallel/Volcano: volcano.sh/job-name=test-job healthy within 24.0235347s
addons_test.go:960: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-854100 addons disable volcano --alsologtostderr -v=1
addons_test.go:960: (dbg) Done: out/minikube-windows-amd64.exe -p addons-854100 addons disable volcano --alsologtostderr -v=1: (25.7491108s)
--- PASS: TestAddons/parallel/Volcano (65.62s)

TestAddons/serial/GCPAuth/Namespaces (0.34s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:652: (dbg) Run:  kubectl --context addons-854100 create ns new-namespace
addons_test.go:666: (dbg) Run:  kubectl --context addons-854100 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.34s)

TestAddons/StoppedEnableDisable (54.66s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-windows-amd64.exe stop -p addons-854100
addons_test.go:174: (dbg) Done: out/minikube-windows-amd64.exe stop -p addons-854100: (41.854691s)
addons_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-854100
addons_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p addons-854100: (5.4754281s)
addons_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-854100
addons_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe addons disable dashboard -p addons-854100: (4.9476155s)
addons_test.go:187: (dbg) Run:  out/minikube-windows-amd64.exe addons disable gvisor -p addons-854100
addons_test.go:187: (dbg) Done: out/minikube-windows-amd64.exe addons disable gvisor -p addons-854100: (2.3742809s)
--- PASS: TestAddons/StoppedEnableDisable (54.66s)

TestCertOptions (401.88s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-options-503000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperv
cert_options_test.go:49: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-options-503000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperv: (5m34.6781712s)
cert_options_test.go:60: (dbg) Run:  out/minikube-windows-amd64.exe -p cert-options-503000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Done: out/minikube-windows-amd64.exe -p cert-options-503000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": (10.2059008s)
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-503000 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p cert-options-503000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Done: out/minikube-windows-amd64.exe ssh -p cert-options-503000 -- "sudo cat /etc/kubernetes/admin.conf": (9.9931009s)
helpers_test.go:175: Cleaning up "cert-options-503000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-options-503000
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-options-503000: (46.8477366s)
--- PASS: TestCertOptions (401.88s)
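
A note on the openssl step above: TestCertOptions confirms that the extra --apiserver-names and --apiserver-ips end up as SANs in /var/lib/minikube/certs/apiserver.crt by dumping the certificate with "openssl x509 -text -noout". The same inspection can be done with Go's crypto/x509, as sketched below; this assumes the certificate has first been copied off the node to a local file named apiserver.crt.

-- example sketch (not from the test run) --
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// A local copy of the apiserver certificate the test inspects on the node.
	data, err := os.ReadFile("apiserver.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found in apiserver.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// The SANs should include the names/IPs passed via --apiserver-names and --apiserver-ips.
	fmt.Println("DNS names:   ", cert.DNSNames)
	fmt.Println("IP addresses:", cert.IPAddresses)
}
-- /example sketch --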

TestCertExpiration (1028.18s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-025200 --memory=2048 --cert-expiration=3m --driver=hyperv
cert_options_test.go:123: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-025200 --memory=2048 --cert-expiration=3m --driver=hyperv: (8m6.8547713s)
E0528 15:58:29.672472    1780 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-200200\client.crt: The system cannot find the path specified.
cert_options_test.go:131: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-025200 --memory=2048 --cert-expiration=8760h --driver=hyperv
E0528 15:59:52.940925    1780 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-200200\client.crt: The system cannot find the path specified.
E0528 16:00:06.458484    1780 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-854100\client.crt: The system cannot find the path specified.
cert_options_test.go:131: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-025200 --memory=2048 --cert-expiration=8760h --driver=hyperv: (5m14.3204553s)
helpers_test.go:175: Cleaning up "cert-expiration-025200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-expiration-025200
E0528 16:05:06.455076    1780 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-854100\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-expiration-025200: (46.9925738s)
--- PASS: TestCertExpiration (1028.18s)

TestDockerFlags (577.07s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-flags-856200 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperv
docker_test.go:51: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-flags-856200 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperv: (8m29.8116908s)
docker_test.go:56: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-856200 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Done: out/minikube-windows-amd64.exe -p docker-flags-856200 ssh "sudo systemctl show docker --property=Environment --no-pager": (9.607164s)
docker_test.go:67: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-856200 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Done: out/minikube-windows-amd64.exe -p docker-flags-856200 ssh "sudo systemctl show docker --property=ExecStart --no-pager": (10.0292496s)
helpers_test.go:175: Cleaning up "docker-flags-856200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-flags-856200
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-flags-856200: (47.583178s)
--- PASS: TestDockerFlags (577.07s)

TestForceSystemdFlag (395.37s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-flag-390500 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperv
docker_test.go:91: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-flag-390500 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperv: (5m44.9488797s)
docker_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-flag-390500 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-flag-390500 ssh "docker info --format {{.CgroupDriver}}": (10.276917s)
helpers_test.go:175: Cleaning up "force-systemd-flag-390500" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-flag-390500
E0528 15:55:06.457644    1780 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-854100\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-flag-390500: (40.1411833s)
--- PASS: TestForceSystemdFlag (395.37s)

                                                
                                    
TestForceSystemdEnv (511.61s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-env-693100 --memory=2048 --alsologtostderr -v=5 --driver=hyperv
docker_test.go:155: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-env-693100 --memory=2048 --alsologtostderr -v=5 --driver=hyperv: (7m35.7874388s)
docker_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-env-693100 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-env-693100 ssh "docker info --format {{.CgroupDriver}}": (10.0526128s)
helpers_test.go:175: Cleaning up "force-systemd-env-693100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-env-693100
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-env-693100: (45.7564022s)
--- PASS: TestForceSystemdEnv (511.61s)

                                                
                                    
TestErrorSpam/start (17.52s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-999500 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-999500 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-999500 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-999500 start --dry-run: (5.8066701s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-999500 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-999500 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-999500 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-999500 start --dry-run: (5.8669337s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-999500 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-999500 start --dry-run
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-999500 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-999500 start --dry-run: (5.8455367s)
--- PASS: TestErrorSpam/start (17.52s)

                                                
                                    
TestErrorSpam/status (37.62s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-999500 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-999500 status
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-999500 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-999500 status: (12.856992s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-999500 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-999500 status
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-999500 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-999500 status: (12.3671105s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-999500 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-999500 status
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-999500 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-999500 status: (12.3903238s)
--- PASS: TestErrorSpam/status (37.62s)

                                                
                                    
TestErrorSpam/pause (23.14s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-999500 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-999500 pause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-999500 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-999500 pause: (8.047157s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-999500 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-999500 pause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-999500 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-999500 pause: (7.4842704s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-999500 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-999500 pause
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-999500 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-999500 pause: (7.6088324s)
--- PASS: TestErrorSpam/pause (23.14s)

                                                
                                    
TestErrorSpam/unpause (23.45s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-999500 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-999500 unpause
E0528 13:40:06.435019    1780 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-854100\client.crt: The system cannot find the path specified.
E0528 13:40:06.450149    1780 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-854100\client.crt: The system cannot find the path specified.
E0528 13:40:06.465995    1780 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-854100\client.crt: The system cannot find the path specified.
E0528 13:40:06.497990    1780 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-854100\client.crt: The system cannot find the path specified.
E0528 13:40:06.544944    1780 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-854100\client.crt: The system cannot find the path specified.
E0528 13:40:06.638290    1780 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-854100\client.crt: The system cannot find the path specified.
E0528 13:40:06.812197    1780 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-854100\client.crt: The system cannot find the path specified.
E0528 13:40:07.143370    1780 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-854100\client.crt: The system cannot find the path specified.
E0528 13:40:07.788028    1780 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-854100\client.crt: The system cannot find the path specified.
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-999500 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-999500 unpause: (7.8559778s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-999500 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-999500 unpause
E0528 13:40:09.083463    1780 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-854100\client.crt: The system cannot find the path specified.
E0528 13:40:11.650041    1780 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-854100\client.crt: The system cannot find the path specified.
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-999500 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-999500 unpause: (7.7839893s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-999500 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-999500 unpause
E0528 13:40:16.774122    1780 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-854100\client.crt: The system cannot find the path specified.
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-999500 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-999500 unpause: (7.8076569s)
--- PASS: TestErrorSpam/unpause (23.45s)

                                                
                                    
TestErrorSpam/stop (60.26s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-999500 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-999500 stop
E0528 13:40:27.017278    1780 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-854100\client.crt: The system cannot find the path specified.
E0528 13:40:47.512766    1780 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-854100\client.crt: The system cannot find the path specified.
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-999500 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-999500 stop: (38.8182542s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-999500 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-999500 stop
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-999500 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-999500 stop: (10.9272661s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-999500 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-999500 stop
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-999500 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-999500 stop: (10.5030115s)
--- PASS: TestErrorSpam/stop (60.26s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0.03s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\test\nested\copy\1780\hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.03s)

                                                
                                    
TestFunctional/serial/StartWithProxy (238.83s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-200200 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperv
E0528 13:42:50.407066    1780 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-854100\client.crt: The system cannot find the path specified.
E0528 13:45:06.424921    1780 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-854100\client.crt: The system cannot find the path specified.
E0528 13:45:34.251732    1780 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-854100\client.crt: The system cannot find the path specified.
functional_test.go:2230: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-200200 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperv: (3m58.8225305s)
--- PASS: TestFunctional/serial/StartWithProxy (238.83s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (130.05s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-200200 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-200200 --alsologtostderr -v=8: (2m10.0495122s)
functional_test.go:659: soft start took 2m10.0522907s for "functional-200200" cluster.
--- PASS: TestFunctional/serial/SoftStart (130.05s)

                                                
                                    
TestFunctional/serial/KubeContext (0.13s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.13s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.23s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-200200 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.23s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (26.32s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-200200 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-200200 cache add registry.k8s.io/pause:3.1: (8.9820265s)
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-200200 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-200200 cache add registry.k8s.io/pause:3.3: (8.6437786s)
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-200200 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-200200 cache add registry.k8s.io/pause:latest: (8.6949888s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (26.32s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (11.12s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-200200 C:\Users\jenkins.minikube1\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local4217223900\001
functional_test.go:1073: (dbg) Done: docker build -t minikube-local-cache-test:functional-200200 C:\Users\jenkins.minikube1\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local4217223900\001: (2.3201553s)
functional_test.go:1085: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-200200 cache add minikube-local-cache-test:functional-200200
functional_test.go:1085: (dbg) Done: out/minikube-windows-amd64.exe -p functional-200200 cache add minikube-local-cache-test:functional-200200: (8.4014589s)
functional_test.go:1090: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-200200 cache delete minikube-local-cache-test:functional-200200
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-200200
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (11.12s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.19s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.19s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.18s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-windows-amd64.exe cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.18s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (9.49s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-200200 ssh sudo crictl images
functional_test.go:1120: (dbg) Done: out/minikube-windows-amd64.exe -p functional-200200 ssh sudo crictl images: (9.4948093s)
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (9.49s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (36.85s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-200200 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1143: (dbg) Done: out/minikube-windows-amd64.exe -p functional-200200 ssh sudo docker rmi registry.k8s.io/pause:latest: (9.5063377s)
functional_test.go:1149: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-200200 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-200200 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (9.5320216s)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	W0528 13:48:46.343891    6264 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-200200 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-windows-amd64.exe -p functional-200200 cache reload: (8.1339342s)
functional_test.go:1159: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-200200 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1159: (dbg) Done: out/minikube-windows-amd64.exe -p functional-200200 ssh sudo crictl inspecti registry.k8s.io/pause:latest: (9.6760825s)
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (36.85s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.37s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.37s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.44s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-200200 kubectl -- --context functional-200200 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.44s)

                                                
                                    
TestFunctional/serial/ExtraConfig (128.22s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-200200 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0528 13:50:06.430883    1780 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-854100\client.crt: The system cannot find the path specified.
functional_test.go:753: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-200200 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (2m8.2216024s)
functional_test.go:757: restart took 2m8.2226621s for "functional-200200" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (128.22s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.18s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-200200 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.18s)

                                                
                                    
TestFunctional/serial/LogsCmd (8.5s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-200200 logs
functional_test.go:1232: (dbg) Done: out/minikube-windows-amd64.exe -p functional-200200 logs: (8.5023242s)
--- PASS: TestFunctional/serial/LogsCmd (8.50s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (10.87s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-200200 logs --file C:\Users\jenkins.minikube1\AppData\Local\Temp\TestFunctionalserialLogsFileCmd1319754282\001\logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-windows-amd64.exe -p functional-200200 logs --file C:\Users\jenkins.minikube1\AppData\Local\Temp\TestFunctionalserialLogsFileCmd1319754282\001\logs.txt: (10.8625319s)
--- PASS: TestFunctional/serial/LogsFileCmd (10.87s)

                                                
                                    
TestFunctional/serial/InvalidService (21.44s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-200200 apply -f testdata\invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-windows-amd64.exe service invalid-svc -p functional-200200
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-windows-amd64.exe service invalid-svc -p functional-200200: exit status 115 (16.8271834s)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://172.18.211.78:32593 |
	|-----------|-------------|-------------|----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0528 13:52:20.036581    8756 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - C:\Users\jenkins.minikube1\AppData\Local\Temp\minikube_service_f513297bf07cd3fd942cead3a34f1b094af52476_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-200200 delete -f testdata\invalidsvc.yaml
functional_test.go:2323: (dbg) Done: kubectl --context functional-200200 delete -f testdata\invalidsvc.yaml: (1.2171163s)
--- PASS: TestFunctional/serial/InvalidService (21.44s)

                                                
                                    
TestFunctional/parallel/StatusCmd (43.49s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-200200 status
functional_test.go:850: (dbg) Done: out/minikube-windows-amd64.exe -p functional-200200 status: (15.1085016s)
functional_test.go:856: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-200200 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:856: (dbg) Done: out/minikube-windows-amd64.exe -p functional-200200 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: (14.5382811s)
functional_test.go:868: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-200200 status -o json
functional_test.go:868: (dbg) Done: out/minikube-windows-amd64.exe -p functional-200200 status -o json: (13.8451812s)
--- PASS: TestFunctional/parallel/StatusCmd (43.49s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (36.71s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-200200 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-200200 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-57b4589c47-kr9cq" [d1de2c03-0323-4a17-959c-d35f7b30c72b] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-57b4589c47-kr9cq" [d1de2c03-0323-4a17-959c-d35f7b30c72b] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 16.0231008s
functional_test.go:1645: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-200200 service hello-node-connect --url
functional_test.go:1645: (dbg) Done: out/minikube-windows-amd64.exe -p functional-200200 service hello-node-connect --url: (20.1169329s)
functional_test.go:1651: found endpoint for hello-node-connect: http://172.18.211.78:31591
functional_test.go:1671: http://172.18.211.78:31591: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-57b4589c47-kr9cq

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://172.18.211.78:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=172.18.211.78:31591
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (36.71s)

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.85s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-200200 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-200200 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.85s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (47.03s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [8091a673-4937-4785-9a43-23a648f10e03] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.0221475s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-200200 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-200200 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-200200 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-200200 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [00920717-7977-4ea4-9159-377d3410157b] Pending
helpers_test.go:344: "sp-pod" [00920717-7977-4ea4-9159-377d3410157b] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [00920717-7977-4ea4-9159-377d3410157b] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 23.0197231s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-200200 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-200200 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-200200 delete -f testdata/storage-provisioner/pod.yaml: (1.8083434s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-200200 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [3a14e29d-4b58-4ceb-a915-7e0ee03a2338] Pending
helpers_test.go:344: "sp-pod" [3a14e29d-4b58-4ceb-a915-7e0ee03a2338] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [3a14e29d-4b58-4ceb-a915-7e0ee03a2338] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 14.0078888s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-200200 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (47.03s)

                                                
                                    
TestFunctional/parallel/SSHCmd (20.12s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-200200 ssh "echo hello"
functional_test.go:1721: (dbg) Done: out/minikube-windows-amd64.exe -p functional-200200 ssh "echo hello": (10.10797s)
functional_test.go:1738: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-200200 ssh "cat /etc/hostname"
functional_test.go:1738: (dbg) Done: out/minikube-windows-amd64.exe -p functional-200200 ssh "cat /etc/hostname": (10.0083117s)
--- PASS: TestFunctional/parallel/SSHCmd (20.12s)

                                                
                                    
TestFunctional/parallel/CpCmd (54.8s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-200200 cp testdata\cp-test.txt /home/docker/cp-test.txt
E0528 13:55:06.434074    1780 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-854100\client.crt: The system cannot find the path specified.
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-200200 cp testdata\cp-test.txt /home/docker/cp-test.txt: (7.570409s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-200200 ssh -n functional-200200 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-200200 ssh -n functional-200200 "sudo cat /home/docker/cp-test.txt": (10.0274744s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-200200 cp functional-200200:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestFunctionalparallelCpCmd4152208111\001\cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-200200 cp functional-200200:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestFunctionalparallelCpCmd4152208111\001\cp-test.txt: (9.7628821s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-200200 ssh -n functional-200200 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-200200 ssh -n functional-200200 "sudo cat /home/docker/cp-test.txt": (9.9414033s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-200200 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-200200 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt: (7.6377917s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-200200 ssh -n functional-200200 "sudo cat /tmp/does/not/exist/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-200200 ssh -n functional-200200 "sudo cat /tmp/does/not/exist/cp-test.txt": (9.8547947s)
--- PASS: TestFunctional/parallel/CpCmd (54.80s)

                                                
                                    
TestFunctional/parallel/MySQL (64.61s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-200200 replace --force -f testdata\mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-64454c8b5c-bv96g" [4e953d05-3b09-4c70-b5e5-2668e235f379] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-64454c8b5c-bv96g" [4e953d05-3b09-4c70-b5e5-2668e235f379] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 48.0114958s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-200200 exec mysql-64454c8b5c-bv96g -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-200200 exec mysql-64454c8b5c-bv96g -- mysql -ppassword -e "show databases;": exit status 1 (332.1915ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-200200 exec mysql-64454c8b5c-bv96g -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-200200 exec mysql-64454c8b5c-bv96g -- mysql -ppassword -e "show databases;": exit status 1 (342.6257ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-200200 exec mysql-64454c8b5c-bv96g -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-200200 exec mysql-64454c8b5c-bv96g -- mysql -ppassword -e "show databases;": exit status 1 (330.1032ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-200200 exec mysql-64454c8b5c-bv96g -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-200200 exec mysql-64454c8b5c-bv96g -- mysql -ppassword -e "show databases;": exit status 1 (353.5725ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-200200 exec mysql-64454c8b5c-bv96g -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-200200 exec mysql-64454c8b5c-bv96g -- mysql -ppassword -e "show databases;": exit status 1 (350.6268ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-200200 exec mysql-64454c8b5c-bv96g -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (64.61s)

                                                
                                    
TestFunctional/parallel/FileSync (12.19s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/1780/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-200200 ssh "sudo cat /etc/test/nested/copy/1780/hosts"
functional_test.go:1927: (dbg) Done: out/minikube-windows-amd64.exe -p functional-200200 ssh "sudo cat /etc/test/nested/copy/1780/hosts": (12.1924195s)
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (12.19s)

                                                
                                    
TestFunctional/parallel/CertSync (66.2s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/1780.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-200200 ssh "sudo cat /etc/ssl/certs/1780.pem"
functional_test.go:1969: (dbg) Done: out/minikube-windows-amd64.exe -p functional-200200 ssh "sudo cat /etc/ssl/certs/1780.pem": (10.9802564s)
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/1780.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-200200 ssh "sudo cat /usr/share/ca-certificates/1780.pem"
functional_test.go:1969: (dbg) Done: out/minikube-windows-amd64.exe -p functional-200200 ssh "sudo cat /usr/share/ca-certificates/1780.pem": (10.9946258s)
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-200200 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1969: (dbg) Done: out/minikube-windows-amd64.exe -p functional-200200 ssh "sudo cat /etc/ssl/certs/51391683.0": (11.0368927s)
functional_test.go:1995: Checking for existence of /etc/ssl/certs/17802.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-200200 ssh "sudo cat /etc/ssl/certs/17802.pem"
functional_test.go:1996: (dbg) Done: out/minikube-windows-amd64.exe -p functional-200200 ssh "sudo cat /etc/ssl/certs/17802.pem": (11.9445564s)
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/17802.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-200200 ssh "sudo cat /usr/share/ca-certificates/17802.pem"
functional_test.go:1996: (dbg) Done: out/minikube-windows-amd64.exe -p functional-200200 ssh "sudo cat /usr/share/ca-certificates/17802.pem": (11.2371769s)
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-200200 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
functional_test.go:1996: (dbg) Done: out/minikube-windows-amd64.exe -p functional-200200 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": (10.0011179s)
--- PASS: TestFunctional/parallel/CertSync (66.20s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.26s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-200200 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.26s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (12.65s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-200200 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-200200 ssh "sudo systemctl is-active crio": exit status 1 (12.6527486s)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	W0528 13:52:39.506742    1372 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (12.65s)

                                                
                                    
TestFunctional/parallel/License (4.03s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-windows-amd64.exe license
functional_test.go:2284: (dbg) Done: out/minikube-windows-amd64.exe license: (4.0087591s)
--- PASS: TestFunctional/parallel/License (4.03s)

                                                
                                    
TestFunctional/parallel/DockerEnv/powershell (49.21s)
=== RUN   TestFunctional/parallel/DockerEnv/powershell
functional_test.go:495: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-200200 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-200200"
functional_test.go:495: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-200200 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-200200": (32.6834709s)
functional_test.go:518: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-200200 docker-env | Invoke-Expression ; docker images"
functional_test.go:518: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-200200 docker-env | Invoke-Expression ; docker images": (16.5073126s)
--- PASS: TestFunctional/parallel/DockerEnv/powershell (49.21s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (7.79s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-200200 image ls --format short --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-200200 image ls --format short --alsologtostderr: (7.7911694s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-200200 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.1
registry.k8s.io/kube-proxy:v1.30.1
registry.k8s.io/kube-controller-manager:v1.30.1
registry.k8s.io/kube-apiserver:v1.30.1
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/google-containers/addon-resizer:functional-200200
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-200200
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-200200 image ls --format short --alsologtostderr:
W0528 13:56:10.903586    7496 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0528 13:56:10.910585    7496 out.go:291] Setting OutFile to fd 844 ...
I0528 13:56:10.911642    7496 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0528 13:56:10.911642    7496 out.go:304] Setting ErrFile to fd 888...
I0528 13:56:10.911642    7496 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0528 13:56:10.935544    7496 config.go:182] Loaded profile config "functional-200200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0528 13:56:10.935951    7496 config.go:182] Loaded profile config "functional-200200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0528 13:56:10.936358    7496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-200200 ).state
I0528 13:56:13.309133    7496 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0528 13:56:13.309133    7496 main.go:141] libmachine: [stderr =====>] : 
I0528 13:56:13.320902    7496 ssh_runner.go:195] Run: systemctl --version
I0528 13:56:13.320902    7496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-200200 ).state
I0528 13:56:15.639606    7496 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0528 13:56:15.639676    7496 main.go:141] libmachine: [stderr =====>] : 
I0528 13:56:15.639676    7496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-200200 ).networkadapters[0]).ipaddresses[0]
I0528 13:56:18.393064    7496 main.go:141] libmachine: [stdout =====>] : 172.18.211.78

                                                
                                                
I0528 13:56:18.394034    7496 main.go:141] libmachine: [stderr =====>] : 
I0528 13:56:18.394146    7496 sshutil.go:53] new ssh client: &{IP:172.18.211.78 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-200200\id_rsa Username:docker}
I0528 13:56:18.510913    7496 ssh_runner.go:235] Completed: systemctl --version: (5.190002s)
I0528 13:56:18.521847    7496 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (7.79s)
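The --alsologtostderr trace above shows how "image ls" works with the hyperv driver: minikube queries the VM state and IP address through the Hyper-V PowerShell module, opens an SSH session to the node, and runs docker's image listing with a JSON template. A rough manual equivalent (sketch; the Hyper-V cmdlets need an elevated session, and the SSH key path below is hypothetical, the real one sits under the profile's .minikube\machines directory):

    # Same steps as the trace, done by hand (sketch).
    (Hyper-V\Get-VM functional-200200).State                                      # expect "Running"
    $ip = ((Hyper-V\Get-VM functional-200200).NetworkAdapters[0]).IPAddresses[0]  # the node's IP
    ssh -i "$env:USERPROFILE\.minikube\machines\functional-200200\id_rsa" docker@$ip `
        'docker images --no-trunc --format "{{json .}}"'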

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (7.8s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-200200 image ls --format table --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-200200 image ls --format table --alsologtostderr: (7.7968313s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-200200 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| registry.k8s.io/kube-controller-manager     | v1.30.1           | 25a1387cdab82 | 111MB  |
| docker.io/library/nginx                     | latest            | e784f4560448b | 188MB  |
| registry.k8s.io/etcd                        | 3.5.12-0          | 3861cfcd7c04c | 149MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| docker.io/library/mysql                     | 5.7               | 5107333e08a87 | 501MB  |
| registry.k8s.io/coredns/coredns             | v1.11.1           | cbb01a7bd410d | 59.8MB |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| registry.k8s.io/kube-apiserver              | v1.30.1           | 91be940803172 | 117MB  |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| gcr.io/google-containers/addon-resizer      | functional-200200 | ffd4cfbbe753e | 32.9MB |
| docker.io/library/minikube-local-cache-test | functional-200200 | 44cb4ab8fe8c3 | 30B    |
| registry.k8s.io/kube-scheduler              | v1.30.1           | a52dc94f0a912 | 62MB   |
| registry.k8s.io/kube-proxy                  | v1.30.1           | 747097150317f | 84.7MB |
| docker.io/library/nginx                     | alpine            | 501d84f5d0648 | 48.3MB |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-200200 image ls --format table --alsologtostderr:
W0528 13:56:20.259572    4744 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0528 13:56:20.268117    4744 out.go:291] Setting OutFile to fd 708 ...
I0528 13:56:20.285639    4744 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0528 13:56:20.285639    4744 out.go:304] Setting ErrFile to fd 876...
I0528 13:56:20.285639    4744 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0528 13:56:20.302257    4744 config.go:182] Loaded profile config "functional-200200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0528 13:56:20.303601    4744 config.go:182] Loaded profile config "functional-200200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0528 13:56:20.303796    4744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-200200 ).state
I0528 13:56:22.631203    4744 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0528 13:56:22.631203    4744 main.go:141] libmachine: [stderr =====>] : 
I0528 13:56:22.649808    4744 ssh_runner.go:195] Run: systemctl --version
I0528 13:56:22.649808    4744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-200200 ).state
I0528 13:56:24.977723    4744 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0528 13:56:24.977877    4744 main.go:141] libmachine: [stderr =====>] : 
I0528 13:56:24.977877    4744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-200200 ).networkadapters[0]).ipaddresses[0]
I0528 13:56:27.754125    4744 main.go:141] libmachine: [stdout =====>] : 172.18.211.78

                                                
                                                
I0528 13:56:27.754901    4744 main.go:141] libmachine: [stderr =====>] : 
I0528 13:56:27.754995    4744 sshutil.go:53] new ssh client: &{IP:172.18.211.78 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-200200\id_rsa Username:docker}
I0528 13:56:27.863836    4744 ssh_runner.go:235] Completed: systemctl --version: (5.2140192s)
I0528 13:56:27.875597    4744 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (7.80s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (7.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-200200 image ls --format json --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-200200 image ls --format json --alsologtostderr: (7.730929s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-200200 image ls --format json --alsologtostderr:
[{"id":"44cb4ab8fe8c355942fda71e84b23557ba10027d8c1a7bd6d1bcdffc43cda86c","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-200200"],"size":"30"},{"id":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"149000000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.1"],"size":"111000000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.1"],"size":"117000000"},{"id":"a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.1"],"size":"62000000"},{"id":"e784f4560448b14a66f55c26e1b4dad2c2877cc73d001b7cd0b18e24a700a070","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"188000000"},{"id":"501d84f5d06487ff81e506134dc922ed4fd2080d5521eb5b6ee4054fa17d15c4","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"48300000"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"501000000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-200200"],"size":"32900000"},{"id":"747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.30.1"],"size":"84700000"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"59800000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-200200 image ls --format json --alsologtostderr:
W0528 13:56:20.057038    2656 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0528 13:56:20.064057    2656 out.go:291] Setting OutFile to fd 1016 ...
I0528 13:56:20.065038    2656 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0528 13:56:20.065038    2656 out.go:304] Setting ErrFile to fd 664...
I0528 13:56:20.065038    2656 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0528 13:56:20.082038    2656 config.go:182] Loaded profile config "functional-200200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0528 13:56:20.082038    2656 config.go:182] Loaded profile config "functional-200200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0528 13:56:20.083044    2656 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-200200 ).state
I0528 13:56:22.431430    2656 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0528 13:56:22.431521    2656 main.go:141] libmachine: [stderr =====>] : 
I0528 13:56:22.446657    2656 ssh_runner.go:195] Run: systemctl --version
I0528 13:56:22.446657    2656 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-200200 ).state
I0528 13:56:24.797817    2656 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0528 13:56:24.798046    2656 main.go:141] libmachine: [stderr =====>] : 
I0528 13:56:24.798288    2656 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-200200 ).networkadapters[0]).ipaddresses[0]
I0528 13:56:27.504213    2656 main.go:141] libmachine: [stdout =====>] : 172.18.211.78

                                                
                                                
I0528 13:56:27.504726    2656 main.go:141] libmachine: [stderr =====>] : 
I0528 13:56:27.504726    2656 sshutil.go:53] new ssh client: &{IP:172.18.211.78 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-200200\id_rsa Username:docker}
I0528 13:56:27.610754    2656 ssh_runner.go:235] Completed: systemctl --version: (5.164088s)
I0528 13:56:27.620326    2656 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (7.73s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (7.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-200200 image ls --format yaml --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-200200 image ls --format yaml --alsologtostderr: (7.7782811s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-200200 image ls --format yaml --alsologtostderr:
- id: 25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.1
size: "111000000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 44cb4ab8fe8c355942fda71e84b23557ba10027d8c1a7bd6d1bcdffc43cda86c
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-200200
size: "30"
- id: 91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.1
size: "117000000"
- id: 747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.30.1
size: "84700000"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "501000000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.1
size: "62000000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 501d84f5d06487ff81e506134dc922ed4fd2080d5521eb5b6ee4054fa17d15c4
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "48300000"
- id: e784f4560448b14a66f55c26e1b4dad2c2877cc73d001b7cd0b18e24a700a070
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "188000000"
- id: 3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "149000000"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "59800000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-200200
size: "32900000"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-200200 image ls --format yaml --alsologtostderr:
W0528 13:56:12.281895    6052 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0528 13:56:12.289825    6052 out.go:291] Setting OutFile to fd 1008 ...
I0528 13:56:12.306966    6052 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0528 13:56:12.307032    6052 out.go:304] Setting ErrFile to fd 860...
I0528 13:56:12.307032    6052 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0528 13:56:12.322579    6052 config.go:182] Loaded profile config "functional-200200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0528 13:56:12.323167    6052 config.go:182] Loaded profile config "functional-200200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0528 13:56:12.323167    6052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-200200 ).state
I0528 13:56:14.639928    6052 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0528 13:56:14.639928    6052 main.go:141] libmachine: [stderr =====>] : 
I0528 13:56:14.653355    6052 ssh_runner.go:195] Run: systemctl --version
I0528 13:56:14.653355    6052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-200200 ).state
I0528 13:56:17.005926    6052 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0528 13:56:17.006725    6052 main.go:141] libmachine: [stderr =====>] : 
I0528 13:56:17.006725    6052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-200200 ).networkadapters[0]).ipaddresses[0]
I0528 13:56:19.776701    6052 main.go:141] libmachine: [stdout =====>] : 172.18.211.78

                                                
                                                
I0528 13:56:19.776758    6052 main.go:141] libmachine: [stderr =====>] : 
I0528 13:56:19.777074    6052 sshutil.go:53] new ssh client: &{IP:172.18.211.78 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-200200\id_rsa Username:docker}
I0528 13:56:19.882763    6052 ssh_runner.go:235] Completed: systemctl --version: (5.2293577s)
I0528 13:56:19.894114    6052 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (7.78s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-200200 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-200200 ssh pgrep buildkitd: exit status 1 (10.0551764s)

                                                
                                                
** stderr ** 
	W0528 13:56:18.695730    8468 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-200200 image build -t localhost/my-image:functional-200200 testdata\build --alsologtostderr
E0528 13:56:29.623903    1780 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-854100\client.crt: The system cannot find the path specified.
functional_test.go:314: (dbg) Done: out/minikube-windows-amd64.exe -p functional-200200 image build -t localhost/my-image:functional-200200 testdata\build --alsologtostderr: (9.6547348s)
functional_test.go:319: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-200200 image build -t localhost/my-image:functional-200200 testdata\build --alsologtostderr:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in 751fb4bceced
---> Removed intermediate container 751fb4bceced
---> b8cff8421c1f
Step 3/3 : ADD content.txt /
---> 9edd24a7177f
Successfully built 9edd24a7177f
Successfully tagged localhost/my-image:functional-200200
functional_test.go:322: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-200200 image build -t localhost/my-image:functional-200200 testdata\build --alsologtostderr:
W0528 13:56:28.745877    1904 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0528 13:56:28.753845    1904 out.go:291] Setting OutFile to fd 860 ...
I0528 13:56:28.772838    1904 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0528 13:56:28.772838    1904 out.go:304] Setting ErrFile to fd 1020...
I0528 13:56:28.772838    1904 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0528 13:56:28.791560    1904 config.go:182] Loaded profile config "functional-200200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0528 13:56:28.807200    1904 config.go:182] Loaded profile config "functional-200200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0528 13:56:28.808667    1904 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-200200 ).state
I0528 13:56:31.020418    1904 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0528 13:56:31.020534    1904 main.go:141] libmachine: [stderr =====>] : 
I0528 13:56:31.036177    1904 ssh_runner.go:195] Run: systemctl --version
I0528 13:56:31.036177    1904 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-200200 ).state
I0528 13:56:33.237028    1904 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0528 13:56:33.237028    1904 main.go:141] libmachine: [stderr =====>] : 
I0528 13:56:33.237172    1904 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-200200 ).networkadapters[0]).ipaddresses[0]
I0528 13:56:35.800840    1904 main.go:141] libmachine: [stdout =====>] : 172.18.211.78

                                                
                                                
I0528 13:56:35.801142    1904 main.go:141] libmachine: [stderr =====>] : 
I0528 13:56:35.801413    1904 sshutil.go:53] new ssh client: &{IP:172.18.211.78 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-200200\id_rsa Username:docker}
I0528 13:56:35.894788    1904 ssh_runner.go:235] Completed: systemctl --version: (4.8585363s)
I0528 13:56:35.894862    1904 build_images.go:161] Building image from path: C:\Users\jenkins.minikube1\AppData\Local\Temp\build.3644475346.tar
I0528 13:56:35.907409    1904 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0528 13:56:35.939930    1904 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3644475346.tar
I0528 13:56:35.947903    1904 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3644475346.tar: stat -c "%s %y" /var/lib/minikube/build/build.3644475346.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3644475346.tar': No such file or directory
I0528 13:56:35.948837    1904 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\AppData\Local\Temp\build.3644475346.tar --> /var/lib/minikube/build/build.3644475346.tar (3072 bytes)
I0528 13:56:36.011363    1904 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3644475346
I0528 13:56:36.044394    1904 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3644475346 -xf /var/lib/minikube/build/build.3644475346.tar
I0528 13:56:36.066182    1904 docker.go:360] Building image: /var/lib/minikube/build/build.3644475346
I0528 13:56:36.075870    1904 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-200200 /var/lib/minikube/build/build.3644475346
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/

                                                
                                                
I0528 13:56:38.196805    1904 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-200200 /var/lib/minikube/build/build.3644475346: (2.1209313s)
I0528 13:56:38.209788    1904 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3644475346
I0528 13:56:38.249692    1904 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3644475346.tar
I0528 13:56:38.268738    1904 build_images.go:217] Built localhost/my-image:functional-200200 from C:\Users\jenkins.minikube1\AppData\Local\Temp\build.3644475346.tar
I0528 13:56:38.268738    1904 build_images.go:133] succeeded building to: functional-200200
I0528 13:56:38.268738    1904 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-200200 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-200200 image ls: (7.2874413s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (27.00s)
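The build trace above shows the path "image build" takes on this driver: the local testdata\build directory is packed into a tar under %TEMP%, copied over SSH into /var/lib/minikube/build on the node, unpacked, built with the node's Docker daemon, and then cleaned up. Condensed into manual commands (sketch; build.XXXX.tar stands in for the randomly named archive minikube generates, and the copy step is shown only as a comment because minikube streams it over its own SSH session):

    # Roughly the sequence from the trace, run against the node (sketch).
    minikube -p functional-200200 ssh "sudo mkdir -p /var/lib/minikube/build/ctx"
    # (copy build.XXXX.tar to the node, e.g. with scp against the node IP and machine key)
    minikube -p functional-200200 ssh "sudo tar -C /var/lib/minikube/build/ctx -xf /var/lib/minikube/build/build.XXXX.tar"
    minikube -p functional-200200 ssh "sudo docker build -t localhost/my-image:functional-200200 /var/lib/minikube/build/ctx"
    minikube -p functional-200200 ssh "sudo rm -rf /var/lib/minikube/build/ctx /var/lib/minikube/build/build.XXXX.tar"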

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (4.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (4.3565594s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-200200
--- PASS: TestFunctional/parallel/ImageCommands/Setup (4.65s)

                                                
                                    
TestFunctional/parallel/Version/short (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-200200 version --short
--- PASS: TestFunctional/parallel/Version/short (0.17s)

                                                
                                    
TestFunctional/parallel/Version/components (8.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-200200 version -o=json --components
functional_test.go:2266: (dbg) Done: out/minikube-windows-amd64.exe -p functional-200200 version -o=json --components: (8.3884157s)
--- PASS: TestFunctional/parallel/Version/components (8.39s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-200200 image load --daemon gcr.io/google-containers/addon-resizer:functional-200200 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-windows-amd64.exe -p functional-200200 image load --daemon gcr.io/google-containers/addon-resizer:functional-200200 --alsologtostderr: (17.5616692s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-200200 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-200200 image ls: (8.4373817s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (26.00s)
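Together with ImageCommands/Setup above, this subtest exercises the usual way of getting a locally tagged image into the cluster: pull and retag on the host, then push it into the node's runtime with "image load --daemon". A hedged sketch of that workflow (the image names are the ones used by the test; any local tag works the same way):

    # Move a host-side image into the cluster's container runtime (sketch).
    docker pull gcr.io/google-containers/addon-resizer:1.8.8
    docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-200200
    minikube -p functional-200200 image load --daemon gcr.io/google-containers/addon-resizer:functional-200200
    minikube -p functional-200200 image ls   # the functional-200200 tag should now appear in the cluster's image list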

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (2.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-200200 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Done: out/minikube-windows-amd64.exe -p functional-200200 update-context --alsologtostderr -v=2: (2.525254s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (2.53s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (2.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-200200 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Done: out/minikube-windows-amd64.exe -p functional-200200 update-context --alsologtostderr -v=2: (2.5326607s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (2.53s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (2.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-200200 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Done: out/minikube-windows-amd64.exe -p functional-200200 update-context --alsologtostderr -v=2: (2.56371s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (2.57s)
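All three update-context subtests above run the same command; in normal use it rewrites the profile's kubeconfig entry so it points at the VM's current IP, which matters after the Hyper-V VM comes back with a new address. A minimal sketch:

    # Refresh the kubeconfig entry for the profile, then confirm the apiserver is reachable (sketch).
    minikube -p functional-200200 update-context
    kubectl --context functional-200200 cluster-info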

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (11.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-windows-amd64.exe profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
functional_test.go:1271: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (11.2018479s)
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (11.63s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (11.86s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-windows-amd64.exe profile list
functional_test.go:1306: (dbg) Done: out/minikube-windows-amd64.exe profile list: (11.6792483s)
functional_test.go:1311: Took "11.679849s" to run "out/minikube-windows-amd64.exe profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-windows-amd64.exe profile list -l
functional_test.go:1325: Took "176.8679ms" to run "out/minikube-windows-amd64.exe profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (11.86s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (21.98s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-200200 image load --daemon gcr.io/google-containers/addon-resizer:functional-200200 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-windows-amd64.exe -p functional-200200 image load --daemon gcr.io/google-containers/addon-resizer:functional-200200 --alsologtostderr: (13.0079485s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-200200 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-200200 image ls: (8.968821s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (21.98s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (11.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json
functional_test.go:1357: (dbg) Done: out/minikube-windows-amd64.exe profile list -o json: (11.5062304s)
functional_test.go:1362: Took "11.5063066s" to run "out/minikube-windows-amd64.exe profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json --light
functional_test.go:1375: Took "170.7198ms" to run "out/minikube-windows-amd64.exe profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (11.68s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (9.83s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-200200 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-200200 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-200200 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 13676: OpenProcess: The parameter is incorrect.
helpers_test.go:508: unable to kill pid 9776: TerminateProcess: Access is denied.
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-200200 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (9.83s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (27.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (3.9239113s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-200200
functional_test.go:244: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-200200 image load --daemon gcr.io/google-containers/addon-resizer:functional-200200 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-windows-amd64.exe -p functional-200200 image load --daemon gcr.io/google-containers/addon-resizer:functional-200200 --alsologtostderr: (15.5322157s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-200200 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-200200 image ls: (7.8851342s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (27.63s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-200200 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (29.82s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-200200 apply -f testdata\testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [ef4920be-7c80-417b-8d4c-8c5828340e50] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [ef4920be-7c80-417b-8d4c-8c5828340e50] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 29.1750786s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (29.82s)
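The tunnel subtests start "minikube tunnel" as a background daemon and then wait for a pod labeled run=nginx-svc (from testdata\testsvc.yaml) to become Ready; later steps in this suite check that its LoadBalancer service becomes reachable through the tunnel. A hedged manual equivalent (the label and service type mirror the test, everything else is illustrative; on Windows the tunnel may need an elevated session):

    # Run a tunnel and expose a LoadBalancer service through it (sketch).
    Start-Process minikube -ArgumentList "-p","functional-200200","tunnel"          # keep running in the background
    kubectl --context functional-200200 run nginx-svc --image=nginx --port 80       # kubectl labels the pod run=nginx-svc
    kubectl --context functional-200200 expose pod nginx-svc --type=LoadBalancer --port 80
    kubectl --context functional-200200 get svc nginx-svc -w                        # EXTERNAL-IP fills in via the tunnel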

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (10.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-200200 image save gcr.io/google-containers/addon-resizer:functional-200200 C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-windows-amd64.exe -p functional-200200 image save gcr.io/google-containers/addon-resizer:functional-200200 C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr: (10.5111807s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (10.51s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-200200 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 7940: TerminateProcess: Access is denied.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (8.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-200200 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-200200 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6d85cfcfd8-57cdb" [8eceafea-a672-4574-98f3-322e8d4a994e] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6d85cfcfd8-57cdb" [8eceafea-a672-4574-98f3-322e8d4a994e] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.0066271s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (8.69s)
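The deployment and NodePort service created above are what the following ServiceCmd subtests (List, JSONOutput) enumerate. Once deployed, the service is typically resolved to a URL like this (sketch; --url prints the NodePort endpoint instead of opening a browser):

    # Resolve and hit the hello-node service (sketch).
    $url = minikube -p functional-200200 service hello-node --url
    curl.exe $url   # the echoserver image replies with details of the request it received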

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (16.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-200200 image rm gcr.io/google-containers/addon-resizer:functional-200200 --alsologtostderr
functional_test.go:391: (dbg) Done: out/minikube-windows-amd64.exe -p functional-200200 image rm gcr.io/google-containers/addon-resizer:functional-200200 --alsologtostderr: (7.526742s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-200200 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-200200 image ls: (9.1803614s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (16.71s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (14.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-200200 service list
functional_test.go:1455: (dbg) Done: out/minikube-windows-amd64.exe -p functional-200200 service list: (14.7316073s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (14.73s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (19.98s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-200200 image load C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-windows-amd64.exe -p functional-200200 image load C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr: (11.4253568s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-200200 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-200200 image ls: (8.5506287s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (19.98s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (14.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-200200 service list -o json
functional_test.go:1485: (dbg) Done: out/minikube-windows-amd64.exe -p functional-200200 service list -o json: (14.4701009s)
functional_test.go:1490: Took "14.4708821s" to run "out/minikube-windows-amd64.exe -p functional-200200 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (14.47s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (11.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-200200
functional_test.go:423: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-200200 image save --daemon gcr.io/google-containers/addon-resizer:functional-200200 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-windows-amd64.exe -p functional-200200 image save --daemon gcr.io/google-containers/addon-resizer:functional-200200 --alsologtostderr: (10.5236595s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-200200
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (11.01s)
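Taken together, ImageSaveToFile, ImageRemove, ImageLoadFromFile and this subtest cover a full round trip: export the in-cluster image to a tar on the host, delete it from the cluster, load it back from the tar, and finally export it into the host's Docker daemon. Condensed (sketch; the tar path is illustrative, the log used the Jenkins workspace):

    # Round trip of an image between host and cluster (sketch).
    minikube -p functional-200200 image save gcr.io/google-containers/addon-resizer:functional-200200 .\addon-resizer-save.tar
    minikube -p functional-200200 image rm gcr.io/google-containers/addon-resizer:functional-200200
    minikube -p functional-200200 image load .\addon-resizer-save.tar
    minikube -p functional-200200 image save --daemon gcr.io/google-containers/addon-resizer:functional-200200
    docker image inspect gcr.io/google-containers/addon-resizer:functional-200200   # present on the host again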

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.48s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-200200
--- PASS: TestFunctional/delete_addon-resizer_images (0.48s)

                                                
                                    
TestFunctional/delete_my-image_image (0.19s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-200200
--- PASS: TestFunctional/delete_my-image_image (0.19s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.18s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-200200
--- PASS: TestFunctional/delete_minikube_cached_images (0.18s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (735.48s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p ha-941700 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperv
E0528 14:03:29.660788    1780 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-200200\client.crt: The system cannot find the path specified.
E0528 14:03:29.676257    1780 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-200200\client.crt: The system cannot find the path specified.
E0528 14:03:29.691323    1780 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-200200\client.crt: The system cannot find the path specified.
E0528 14:03:29.723025    1780 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-200200\client.crt: The system cannot find the path specified.
E0528 14:03:29.769404    1780 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-200200\client.crt: The system cannot find the path specified.
E0528 14:03:29.864188    1780 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-200200\client.crt: The system cannot find the path specified.
E0528 14:03:30.034552    1780 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-200200\client.crt: The system cannot find the path specified.
E0528 14:03:30.365341    1780 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-200200\client.crt: The system cannot find the path specified.
E0528 14:03:31.013631    1780 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-200200\client.crt: The system cannot find the path specified.
E0528 14:03:32.302710    1780 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-200200\client.crt: The system cannot find the path specified.
E0528 14:03:34.865765    1780 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-200200\client.crt: The system cannot find the path specified.
E0528 14:03:39.991950    1780 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-200200\client.crt: The system cannot find the path specified.
E0528 14:03:50.241616    1780 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-200200\client.crt: The system cannot find the path specified.
E0528 14:04:10.728160    1780 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-200200\client.crt: The system cannot find the path specified.
E0528 14:04:51.697247    1780 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-200200\client.crt: The system cannot find the path specified.
E0528 14:05:06.437077    1780 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-854100\client.crt: The system cannot find the path specified.
E0528 14:06:13.628737    1780 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-200200\client.crt: The system cannot find the path specified.
E0528 14:08:29.653200    1780 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-200200\client.crt: The system cannot find the path specified.
E0528 14:08:57.481772    1780 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-200200\client.crt: The system cannot find the path specified.
E0528 14:10:06.429524    1780 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-854100\client.crt: The system cannot find the path specified.
ha_test.go:101: (dbg) Done: out/minikube-windows-amd64.exe start -p ha-941700 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperv: (11m32.2731527s)
ha_test.go:107: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-941700 status -v=7 --alsologtostderr
ha_test.go:107: (dbg) Done: out/minikube-windows-amd64.exe -p ha-941700 status -v=7 --alsologtostderr: (43.2052843s)
--- PASS: TestMultiControlPlane/serial/StartCluster (735.48s)

TestMultiControlPlane/serial/DeployApp (10.66s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-941700 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-941700 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-941700 -- rollout status deployment/busybox: (3.8733126s)
ha_test.go:140: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-941700 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-941700 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-941700 -- exec busybox-fc5497c4f-qnh86 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-941700 -- exec busybox-fc5497c4f-qnh86 -- nslookup kubernetes.io: (1.7591418s)
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-941700 -- exec busybox-fc5497c4f-s2cwz -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-941700 -- exec busybox-fc5497c4f-xbkdf -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-941700 -- exec busybox-fc5497c4f-qnh86 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-941700 -- exec busybox-fc5497c4f-s2cwz -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-941700 -- exec busybox-fc5497c4f-xbkdf -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-941700 -- exec busybox-fc5497c4f-qnh86 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-941700 -- exec busybox-fc5497c4f-s2cwz -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-941700 -- exec busybox-fc5497c4f-xbkdf -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (10.66s)
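
The DeployApp checks above reduce to one pattern: run nslookup inside each busybox pod through the minikube-bundled kubectl and treat any non-zero exit as a failure. Below is a minimal Go sketch of that pattern, assuming the profile and pod names that happened to appear in this run; it is an illustration, not the actual ha_test.go code.

// dns_check.go: exec nslookup in every busybox pod via "minikube kubectl".
// Pod and profile names are taken from the log above and will differ per run.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	pods := []string{"busybox-fc5497c4f-qnh86", "busybox-fc5497c4f-s2cwz", "busybox-fc5497c4f-xbkdf"}
	names := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}
	for _, pod := range pods {
		for _, name := range names {
			cmd := exec.Command("out/minikube-windows-amd64.exe", "kubectl", "-p", "ha-941700", "--",
				"exec", pod, "--", "nslookup", name)
			if out, err := cmd.CombinedOutput(); err != nil {
				log.Fatalf("nslookup %s in %s failed: %v\n%s", name, pod, err, out)
			}
			fmt.Printf("%s resolved %s\n", pod, name)
		}
	}
}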

TestMultiControlPlane/serial/AddWorkerNode (260.42s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe node add -p ha-941700 -v=7 --alsologtostderr
E0528 14:15:06.433635    1780 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-854100\client.crt: The system cannot find the path specified.
ha_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe node add -p ha-941700 -v=7 --alsologtostderr: (3m29.5459217s)
ha_test.go:234: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-941700 status -v=7 --alsologtostderr
E0528 14:18:29.653743    1780 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-200200\client.crt: The system cannot find the path specified.
ha_test.go:234: (dbg) Done: out/minikube-windows-amd64.exe -p ha-941700 status -v=7 --alsologtostderr: (50.8738717s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (260.42s)

TestMultiControlPlane/serial/NodeLabels (0.19s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-941700 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.19s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (29.78s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (29.7768096s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (29.78s)

TestMultiControlPlane/serial/CopyFile (648.54s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-941700 status --output json -v=7 --alsologtostderr
E0528 14:19:52.858471    1780 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-200200\client.crt: The system cannot find the path specified.
ha_test.go:326: (dbg) Done: out/minikube-windows-amd64.exe -p ha-941700 status --output json -v=7 --alsologtostderr: (50.2304803s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-941700 cp testdata\cp-test.txt ha-941700:/home/docker/cp-test.txt
E0528 14:20:06.433198    1780 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-854100\client.crt: The system cannot find the path specified.
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-941700 cp testdata\cp-test.txt ha-941700:/home/docker/cp-test.txt: (9.8365039s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-941700 ssh -n ha-941700 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-941700 ssh -n ha-941700 "sudo cat /home/docker/cp-test.txt": (9.8790165s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-941700 cp ha-941700:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile2463334176\001\cp-test_ha-941700.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-941700 cp ha-941700:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile2463334176\001\cp-test_ha-941700.txt: (9.83364s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-941700 ssh -n ha-941700 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-941700 ssh -n ha-941700 "sudo cat /home/docker/cp-test.txt": (9.8267091s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-941700 cp ha-941700:/home/docker/cp-test.txt ha-941700-m02:/home/docker/cp-test_ha-941700_ha-941700-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-941700 cp ha-941700:/home/docker/cp-test.txt ha-941700-m02:/home/docker/cp-test_ha-941700_ha-941700-m02.txt: (17.201845s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-941700 ssh -n ha-941700 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-941700 ssh -n ha-941700 "sudo cat /home/docker/cp-test.txt": (9.8979619s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-941700 ssh -n ha-941700-m02 "sudo cat /home/docker/cp-test_ha-941700_ha-941700-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-941700 ssh -n ha-941700-m02 "sudo cat /home/docker/cp-test_ha-941700_ha-941700-m02.txt": (9.7715166s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-941700 cp ha-941700:/home/docker/cp-test.txt ha-941700-m03:/home/docker/cp-test_ha-941700_ha-941700-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-941700 cp ha-941700:/home/docker/cp-test.txt ha-941700-m03:/home/docker/cp-test_ha-941700_ha-941700-m03.txt: (17.1747229s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-941700 ssh -n ha-941700 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-941700 ssh -n ha-941700 "sudo cat /home/docker/cp-test.txt": (9.8276441s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-941700 ssh -n ha-941700-m03 "sudo cat /home/docker/cp-test_ha-941700_ha-941700-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-941700 ssh -n ha-941700-m03 "sudo cat /home/docker/cp-test_ha-941700_ha-941700-m03.txt": (9.8923446s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-941700 cp ha-941700:/home/docker/cp-test.txt ha-941700-m04:/home/docker/cp-test_ha-941700_ha-941700-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-941700 cp ha-941700:/home/docker/cp-test.txt ha-941700-m04:/home/docker/cp-test_ha-941700_ha-941700-m04.txt: (17.1415731s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-941700 ssh -n ha-941700 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-941700 ssh -n ha-941700 "sudo cat /home/docker/cp-test.txt": (9.9006137s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-941700 ssh -n ha-941700-m04 "sudo cat /home/docker/cp-test_ha-941700_ha-941700-m04.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-941700 ssh -n ha-941700-m04 "sudo cat /home/docker/cp-test_ha-941700_ha-941700-m04.txt": (9.8874598s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-941700 cp testdata\cp-test.txt ha-941700-m02:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-941700 cp testdata\cp-test.txt ha-941700-m02:/home/docker/cp-test.txt: (9.8943164s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-941700 ssh -n ha-941700-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-941700 ssh -n ha-941700-m02 "sudo cat /home/docker/cp-test.txt": (9.8964186s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-941700 cp ha-941700-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile2463334176\001\cp-test_ha-941700-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-941700 cp ha-941700-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile2463334176\001\cp-test_ha-941700-m02.txt: (9.8339564s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-941700 ssh -n ha-941700-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-941700 ssh -n ha-941700-m02 "sudo cat /home/docker/cp-test.txt": (9.7713619s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-941700 cp ha-941700-m02:/home/docker/cp-test.txt ha-941700:/home/docker/cp-test_ha-941700-m02_ha-941700.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-941700 cp ha-941700-m02:/home/docker/cp-test.txt ha-941700:/home/docker/cp-test_ha-941700-m02_ha-941700.txt: (17.2286962s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-941700 ssh -n ha-941700-m02 "sudo cat /home/docker/cp-test.txt"
E0528 14:23:29.661611    1780 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-200200\client.crt: The system cannot find the path specified.
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-941700 ssh -n ha-941700-m02 "sudo cat /home/docker/cp-test.txt": (9.7834006s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-941700 ssh -n ha-941700 "sudo cat /home/docker/cp-test_ha-941700-m02_ha-941700.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-941700 ssh -n ha-941700 "sudo cat /home/docker/cp-test_ha-941700-m02_ha-941700.txt": (9.7976675s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-941700 cp ha-941700-m02:/home/docker/cp-test.txt ha-941700-m03:/home/docker/cp-test_ha-941700-m02_ha-941700-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-941700 cp ha-941700-m02:/home/docker/cp-test.txt ha-941700-m03:/home/docker/cp-test_ha-941700-m02_ha-941700-m03.txt: (16.9598706s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-941700 ssh -n ha-941700-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-941700 ssh -n ha-941700-m02 "sudo cat /home/docker/cp-test.txt": (9.7634859s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-941700 ssh -n ha-941700-m03 "sudo cat /home/docker/cp-test_ha-941700-m02_ha-941700-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-941700 ssh -n ha-941700-m03 "sudo cat /home/docker/cp-test_ha-941700-m02_ha-941700-m03.txt": (9.8119397s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-941700 cp ha-941700-m02:/home/docker/cp-test.txt ha-941700-m04:/home/docker/cp-test_ha-941700-m02_ha-941700-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-941700 cp ha-941700-m02:/home/docker/cp-test.txt ha-941700-m04:/home/docker/cp-test_ha-941700-m02_ha-941700-m04.txt: (17.3133596s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-941700 ssh -n ha-941700-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-941700 ssh -n ha-941700-m02 "sudo cat /home/docker/cp-test.txt": (9.8510261s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-941700 ssh -n ha-941700-m04 "sudo cat /home/docker/cp-test_ha-941700-m02_ha-941700-m04.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-941700 ssh -n ha-941700-m04 "sudo cat /home/docker/cp-test_ha-941700-m02_ha-941700-m04.txt": (9.7554401s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-941700 cp testdata\cp-test.txt ha-941700-m03:/home/docker/cp-test.txt
E0528 14:25:06.434731    1780 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-854100\client.crt: The system cannot find the path specified.
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-941700 cp testdata\cp-test.txt ha-941700-m03:/home/docker/cp-test.txt: (9.8023666s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-941700 ssh -n ha-941700-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-941700 ssh -n ha-941700-m03 "sudo cat /home/docker/cp-test.txt": (9.8300422s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-941700 cp ha-941700-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile2463334176\001\cp-test_ha-941700-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-941700 cp ha-941700-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile2463334176\001\cp-test_ha-941700-m03.txt: (9.8642518s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-941700 ssh -n ha-941700-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-941700 ssh -n ha-941700-m03 "sudo cat /home/docker/cp-test.txt": (9.7437336s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-941700 cp ha-941700-m03:/home/docker/cp-test.txt ha-941700:/home/docker/cp-test_ha-941700-m03_ha-941700.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-941700 cp ha-941700-m03:/home/docker/cp-test.txt ha-941700:/home/docker/cp-test_ha-941700-m03_ha-941700.txt: (17.1543213s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-941700 ssh -n ha-941700-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-941700 ssh -n ha-941700-m03 "sudo cat /home/docker/cp-test.txt": (9.719795s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-941700 ssh -n ha-941700 "sudo cat /home/docker/cp-test_ha-941700-m03_ha-941700.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-941700 ssh -n ha-941700 "sudo cat /home/docker/cp-test_ha-941700-m03_ha-941700.txt": (9.7891642s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-941700 cp ha-941700-m03:/home/docker/cp-test.txt ha-941700-m02:/home/docker/cp-test_ha-941700-m03_ha-941700-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-941700 cp ha-941700-m03:/home/docker/cp-test.txt ha-941700-m02:/home/docker/cp-test_ha-941700-m03_ha-941700-m02.txt: (17.1301977s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-941700 ssh -n ha-941700-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-941700 ssh -n ha-941700-m03 "sudo cat /home/docker/cp-test.txt": (9.7462341s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-941700 ssh -n ha-941700-m02 "sudo cat /home/docker/cp-test_ha-941700-m03_ha-941700-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-941700 ssh -n ha-941700-m02 "sudo cat /home/docker/cp-test_ha-941700-m03_ha-941700-m02.txt": (9.7570962s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-941700 cp ha-941700-m03:/home/docker/cp-test.txt ha-941700-m04:/home/docker/cp-test_ha-941700-m03_ha-941700-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-941700 cp ha-941700-m03:/home/docker/cp-test.txt ha-941700-m04:/home/docker/cp-test_ha-941700-m03_ha-941700-m04.txt: (17.0671832s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-941700 ssh -n ha-941700-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-941700 ssh -n ha-941700-m03 "sudo cat /home/docker/cp-test.txt": (9.9544314s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-941700 ssh -n ha-941700-m04 "sudo cat /home/docker/cp-test_ha-941700-m03_ha-941700-m04.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-941700 ssh -n ha-941700-m04 "sudo cat /home/docker/cp-test_ha-941700-m03_ha-941700-m04.txt": (9.8659079s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-941700 cp testdata\cp-test.txt ha-941700-m04:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-941700 cp testdata\cp-test.txt ha-941700-m04:/home/docker/cp-test.txt: (9.8499687s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-941700 ssh -n ha-941700-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-941700 ssh -n ha-941700-m04 "sudo cat /home/docker/cp-test.txt": (9.6897572s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-941700 cp ha-941700-m04:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile2463334176\001\cp-test_ha-941700-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-941700 cp ha-941700-m04:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile2463334176\001\cp-test_ha-941700-m04.txt: (9.6858602s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-941700 ssh -n ha-941700-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-941700 ssh -n ha-941700-m04 "sudo cat /home/docker/cp-test.txt": (9.6934052s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-941700 cp ha-941700-m04:/home/docker/cp-test.txt ha-941700:/home/docker/cp-test_ha-941700-m04_ha-941700.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-941700 cp ha-941700-m04:/home/docker/cp-test.txt ha-941700:/home/docker/cp-test_ha-941700-m04_ha-941700.txt: (17.260194s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-941700 ssh -n ha-941700-m04 "sudo cat /home/docker/cp-test.txt"
E0528 14:28:29.658900    1780 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-200200\client.crt: The system cannot find the path specified.
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-941700 ssh -n ha-941700-m04 "sudo cat /home/docker/cp-test.txt": (9.7507587s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-941700 ssh -n ha-941700 "sudo cat /home/docker/cp-test_ha-941700-m04_ha-941700.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-941700 ssh -n ha-941700 "sudo cat /home/docker/cp-test_ha-941700-m04_ha-941700.txt": (9.8781888s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-941700 cp ha-941700-m04:/home/docker/cp-test.txt ha-941700-m02:/home/docker/cp-test_ha-941700-m04_ha-941700-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-941700 cp ha-941700-m04:/home/docker/cp-test.txt ha-941700-m02:/home/docker/cp-test_ha-941700-m04_ha-941700-m02.txt: (17.1500814s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-941700 ssh -n ha-941700-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-941700 ssh -n ha-941700-m04 "sudo cat /home/docker/cp-test.txt": (9.7151034s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-941700 ssh -n ha-941700-m02 "sudo cat /home/docker/cp-test_ha-941700-m04_ha-941700-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-941700 ssh -n ha-941700-m02 "sudo cat /home/docker/cp-test_ha-941700-m04_ha-941700-m02.txt": (9.797038s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-941700 cp ha-941700-m04:/home/docker/cp-test.txt ha-941700-m03:/home/docker/cp-test_ha-941700-m04_ha-941700-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-941700 cp ha-941700-m04:/home/docker/cp-test.txt ha-941700-m03:/home/docker/cp-test_ha-941700-m04_ha-941700-m03.txt: (17.0368811s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-941700 ssh -n ha-941700-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-941700 ssh -n ha-941700-m04 "sudo cat /home/docker/cp-test.txt": (9.8484179s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-941700 ssh -n ha-941700-m03 "sudo cat /home/docker/cp-test_ha-941700-m04_ha-941700-m03.txt"
E0528 14:29:49.649129    1780 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-854100\client.crt: The system cannot find the path specified.
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-941700 ssh -n ha-941700-m03 "sudo cat /home/docker/cp-test_ha-941700-m04_ha-941700-m03.txt": (9.7599967s)
--- PASS: TestMultiControlPlane/serial/CopyFile (648.54s)
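
Every CopyFile step above is the same two-command check: "minikube cp" a file onto a node, then "minikube ssh -n <node>" with a sudo cat to confirm the contents arrived. A minimal Go sketch of that loop follows, assuming the node names from this run; it illustrates the pattern only and is not the helpers_test.go implementation.

// copy_check.go: cp a test file to each node, then cat it back over ssh.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

// run invokes the minikube binary used throughout this report and fails fast.
func run(args ...string) string {
	out, err := exec.Command("out/minikube-windows-amd64.exe", args...).CombinedOutput()
	if err != nil {
		log.Fatalf("%v failed: %v\n%s", args, err, out)
	}
	return string(out)
}

func main() {
	nodes := []string{"ha-941700", "ha-941700-m02", "ha-941700-m03", "ha-941700-m04"}
	for _, n := range nodes {
		run("-p", "ha-941700", "cp", "testdata/cp-test.txt", n+":/home/docker/cp-test.txt")
		got := run("-p", "ha-941700", "ssh", "-n", n, "sudo cat /home/docker/cp-test.txt")
		fmt.Printf("%s: got %d bytes back\n", n, len(got))
	}
}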

TestImageBuild/serial/Setup (198.21s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-windows-amd64.exe start -p image-094100 --driver=hyperv
E0528 14:35:06.431471    1780 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-854100\client.crt: The system cannot find the path specified.
E0528 14:36:32.868706    1780 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-200200\client.crt: The system cannot find the path specified.
image_test.go:69: (dbg) Done: out/minikube-windows-amd64.exe start -p image-094100 --driver=hyperv: (3m18.2103141s)
--- PASS: TestImageBuild/serial/Setup (198.21s)

TestImageBuild/serial/NormalBuild (9.67s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-094100
image_test.go:78: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-094100: (9.6662561s)
--- PASS: TestImageBuild/serial/NormalBuild (9.67s)

TestImageBuild/serial/BuildWithBuildArg (9.05s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-094100
image_test.go:99: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-094100: (9.0532946s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (9.05s)

TestImageBuild/serial/BuildWithDockerIgnore (7.72s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-094100
image_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-094100: (7.7228659s)
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (7.72s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (7.7s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-094100
image_test.go:88: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-094100: (7.6971687s)
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (7.70s)

TestJSONOutput/start/Command (208.91s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-113300 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperv
E0528 14:38:29.659308    1780 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-200200\client.crt: The system cannot find the path specified.
E0528 14:40:06.441391    1780 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-854100\client.crt: The system cannot find the path specified.
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe start -p json-output-113300 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperv: (3m28.9132568s)
--- PASS: TestJSONOutput/start/Command (208.91s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (7.87s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe pause -p json-output-113300 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe pause -p json-output-113300 --output=json --user=testUser: (7.8687827s)
--- PASS: TestJSONOutput/pause/Command (7.87s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (7.74s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p json-output-113300 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe unpause -p json-output-113300 --output=json --user=testUser: (7.735439s)
--- PASS: TestJSONOutput/unpause/Command (7.74s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (39.42s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe stop -p json-output-113300 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe stop -p json-output-113300 --output=json --user=testUser: (39.4174209s)
--- PASS: TestJSONOutput/stop/Command (39.42s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (1.27s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-error-462300 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p json-output-error-462300 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (190.905ms)
-- stdout --
	{"specversion":"1.0","id":"fc63bfd5-1a80-4fd5-b4bd-5beb39b6ddbe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-462300] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4412 Build 19045.4412","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"98fab894-00c5-4b36-ac20-e5fd038b70d1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube1\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"c5b9bf76-d80c-4e32-9b3c-2eb57e54bb9d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"4fa96377-e421-43dd-a488-91a63dd925b0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"f0c9e635-5837-45f3-b372-426d3ec50bfc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18966"}}
	{"specversion":"1.0","id":"24753b26-efaf-4a6b-a9f0-0b9319e79b1e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"d212162a-289d-4e53-a112-58a553a8d663","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on windows/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
** stderr ** 
	W0528 14:43:02.368018   13340 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
** /stderr **
helpers_test.go:175: Cleaning up "json-output-error-462300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p json-output-error-462300
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p json-output-error-462300: (1.078621s)
--- PASS: TestErrorJSONOutput (1.27s)
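
Each line in the -- stdout -- block above is a CloudEvents-style JSON envelope, and the failure surfaces as an io.k8s.sigs.minikube.error event whose data carries name, exitcode and message. The Go sketch below shows one way to consume such a stream; it is illustrative consumer code, not minikube's or the test's own implementation, and it relies only on the fields visible in this output.

// json_events.go: read "--output=json" lines (e.g. piped from a minikube start)
// and surface the first *.error event.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
	"strings"
)

// event mirrors only the fields visible in the output above.
type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip non-JSON lines such as the stderr warning above
		}
		if strings.HasSuffix(ev.Type, ".error") {
			fmt.Printf("error %s (exit code %s): %s\n", ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
			os.Exit(1)
		}
		fmt.Println(ev.Data["message"])
	}
}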

TestMainNoArgs (0.17s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-windows-amd64.exe
--- PASS: TestMainNoArgs (0.17s)

TestMinikubeProfile (526.91s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p first-416900 --driver=hyperv
E0528 14:43:29.661749    1780 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-200200\client.crt: The system cannot find the path specified.
E0528 14:45:06.446124    1780 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-854100\client.crt: The system cannot find the path specified.
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p first-416900 --driver=hyperv: (3m17.6061032s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p second-416900 --driver=hyperv
E0528 14:46:29.665210    1780 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-854100\client.crt: The system cannot find the path specified.
E0528 14:48:29.664004    1780 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-200200\client.crt: The system cannot find the path specified.
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p second-416900 --driver=hyperv: (3m19.7452119s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile first-416900
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (19.4770449s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile second-416900
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
E0528 14:50:06.446372    1780 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-854100\client.crt: The system cannot find the path specified.
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (19.5078353s)
helpers_test.go:175: Cleaning up "second-416900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p second-416900
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p second-416900: (40.6577607s)
helpers_test.go:175: Cleaning up "first-416900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p first-416900
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p first-416900: (49.1927209s)
--- PASS: TestMinikubeProfile (526.91s)
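
TestMinikubeProfile drives "profile list -ojson" after starting and selecting each profile. The exact JSON schema is not shown in this log, so the sketch below only assumes the top level is a JSON object and reports which keys it contains; it is hypothetical consumer code, not the test's own assertions.

// profile_list.go: dump the top-level keys of "minikube profile list --output json".
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-windows-amd64.exe", "profile", "list", "--output", "json").Output()
	if err != nil {
		log.Fatalf("profile list failed: %v", err)
	}
	var doc map[string]json.RawMessage
	if err := json.Unmarshal(out, &doc); err != nil {
		log.Fatalf("unexpected profile list output: %v", err)
	}
	for key, raw := range doc {
		fmt.Printf("%s: %d bytes of JSON\n", key, len(raw))
	}
}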

TestMountStart/serial/StartWithMountFirst (155.97s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-1-682100 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperv
E0528 14:53:12.884809    1780 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-200200\client.crt: The system cannot find the path specified.
E0528 14:53:29.657158    1780 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-200200\client.crt: The system cannot find the path specified.
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-1-682100 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperv: (2m34.9544328s)
--- PASS: TestMountStart/serial/StartWithMountFirst (155.97s)

TestMountStart/serial/VerifyMountFirst (9.5s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-1-682100 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-1-682100 ssh -- ls /minikube-host: (9.4997882s)
--- PASS: TestMountStart/serial/VerifyMountFirst (9.50s)

TestMountStart/serial/StartWithMountSecond (156.46s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-682100 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperv
E0528 14:55:06.446933    1780 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-854100\client.crt: The system cannot find the path specified.
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-682100 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperv: (2m35.440348s)
--- PASS: TestMountStart/serial/StartWithMountSecond (156.46s)

TestMountStart/serial/VerifyMountSecond (9.63s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-682100 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-682100 ssh -- ls /minikube-host: (9.632024s)
--- PASS: TestMountStart/serial/VerifyMountSecond (9.63s)

TestMountStart/serial/DeleteFirst (30.8s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe delete -p mount-start-1-682100 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe delete -p mount-start-1-682100 --alsologtostderr -v=5: (30.7955134s)
--- PASS: TestMountStart/serial/DeleteFirst (30.80s)

TestMountStart/serial/VerifyMountPostDelete (9.31s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-682100 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-682100 ssh -- ls /minikube-host: (9.3078554s)
--- PASS: TestMountStart/serial/VerifyMountPostDelete (9.31s)

TestMountStart/serial/Stop (31.09s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe stop -p mount-start-2-682100
E0528 14:58:29.670730    1780 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-200200\client.crt: The system cannot find the path specified.
mount_start_test.go:155: (dbg) Done: out/minikube-windows-amd64.exe stop -p mount-start-2-682100: (31.0902634s)
--- PASS: TestMountStart/serial/Stop (31.09s)

TestMountStart/serial/RestartStopped (118.84s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-682100
E0528 15:00:06.449146    1780 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-854100\client.crt: The system cannot find the path specified.
mount_start_test.go:166: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-682100: (1m57.8343751s)
--- PASS: TestMountStart/serial/RestartStopped (118.84s)

TestMountStart/serial/VerifyMountPostStop (9.52s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-682100 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-682100 ssh -- ls /minikube-host: (9.5216169s)
--- PASS: TestMountStart/serial/VerifyMountPostStop (9.52s)

TestMultiNode/serial/FreshStart2Nodes (425.1s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-517300 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperv
E0528 15:03:09.668963    1780 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-854100\client.crt: The system cannot find the path specified.
E0528 15:03:29.661379    1780 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-200200\client.crt: The system cannot find the path specified.
E0528 15:05:06.448178    1780 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-854100\client.crt: The system cannot find the path specified.
multinode_test.go:96: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-517300 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperv: (6m41.8473956s)
multinode_test.go:102: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-517300 status --alsologtostderr
multinode_test.go:102: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-517300 status --alsologtostderr: (23.2389294s)
--- PASS: TestMultiNode/serial/FreshStart2Nodes (425.10s)

TestMultiNode/serial/DeployApp2Nodes (8.5s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-517300 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-517300 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-517300 -- rollout status deployment/busybox: (3.3307442s)
multinode_test.go:505: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-517300 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-517300 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-517300 -- exec busybox-fc5497c4f-7bglt -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-517300 -- exec busybox-fc5497c4f-7bglt -- nslookup kubernetes.io: (1.6996649s)
multinode_test.go:536: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-517300 -- exec busybox-fc5497c4f-r5pc4 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-517300 -- exec busybox-fc5497c4f-7bglt -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-517300 -- exec busybox-fc5497c4f-r5pc4 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-517300 -- exec busybox-fc5497c4f-7bglt -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-517300 -- exec busybox-fc5497c4f-r5pc4 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (8.50s)
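
Note: the DNS assertions above reduce to running nslookup inside each busybox replica scheduled across the two nodes. A manual equivalent, assuming the deployment from testdata is already applied (pod names come from the jsonpath query, as in the log):

    out/minikube-windows-amd64.exe kubectl -p multinode-517300 -- get pods -o jsonpath='{.items[*].metadata.name}'
    out/minikube-windows-amd64.exe kubectl -p multinode-517300 -- exec <pod-name> -- nslookup kubernetes.default.svc.cluster.local

<pod-name> is a placeholder for either busybox replica returned by the first command.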

                                                
                                    
TestMultiNode/serial/AddNode (228.42s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-517300 -v 3 --alsologtostderr
E0528 15:09:52.900201    1780 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-200200\client.crt: The system cannot find the path specified.
E0528 15:10:06.445998    1780 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-854100\client.crt: The system cannot find the path specified.
multinode_test.go:121: (dbg) Done: out/minikube-windows-amd64.exe node add -p multinode-517300 -v 3 --alsologtostderr: (3m11.9262188s)
multinode_test.go:127: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-517300 status --alsologtostderr
multinode_test.go:127: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-517300 status --alsologtostderr: (36.4961342s)
--- PASS: TestMultiNode/serial/AddNode (228.42s)
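
Note: node addition is a single command against the existing profile; a minimal sketch of the two steps exercised above:

    out/minikube-windows-amd64.exe node add -p multinode-517300 -v 3 --alsologtostderr
    out/minikube-windows-amd64.exe -p multinode-517300 status --alsologtostderr

The new machine appears as multinode-517300-m03 in the CopyFile and StopNode steps that follow.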

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.18s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-517300 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.18s)

                                                
                                    
TestMultiNode/serial/ProfileList (9.98s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
multinode_test.go:143: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (9.977372s)
--- PASS: TestMultiNode/serial/ProfileList (9.98s)

                                                
                                    
TestMultiNode/serial/CopyFile (369.66s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-517300 status --output json --alsologtostderr
E0528 15:13:29.665881    1780 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-200200\client.crt: The system cannot find the path specified.
multinode_test.go:184: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-517300 status --output json --alsologtostderr: (36.4137242s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-517300 cp testdata\cp-test.txt multinode-517300:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-517300 cp testdata\cp-test.txt multinode-517300:/home/docker/cp-test.txt: (9.6230614s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-517300 ssh -n multinode-517300 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-517300 ssh -n multinode-517300 "sudo cat /home/docker/cp-test.txt": (9.7644881s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-517300 cp multinode-517300:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiNodeserialCopyFile279711523\001\cp-test_multinode-517300.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-517300 cp multinode-517300:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiNodeserialCopyFile279711523\001\cp-test_multinode-517300.txt: (10.031769s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-517300 ssh -n multinode-517300 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-517300 ssh -n multinode-517300 "sudo cat /home/docker/cp-test.txt": (9.7958017s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-517300 cp multinode-517300:/home/docker/cp-test.txt multinode-517300-m02:/home/docker/cp-test_multinode-517300_multinode-517300-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-517300 cp multinode-517300:/home/docker/cp-test.txt multinode-517300-m02:/home/docker/cp-test_multinode-517300_multinode-517300-m02.txt: (16.9519982s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-517300 ssh -n multinode-517300 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-517300 ssh -n multinode-517300 "sudo cat /home/docker/cp-test.txt": (9.6730095s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-517300 ssh -n multinode-517300-m02 "sudo cat /home/docker/cp-test_multinode-517300_multinode-517300-m02.txt"
E0528 15:15:06.446048    1780 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-854100\client.crt: The system cannot find the path specified.
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-517300 ssh -n multinode-517300-m02 "sudo cat /home/docker/cp-test_multinode-517300_multinode-517300-m02.txt": (9.5563836s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-517300 cp multinode-517300:/home/docker/cp-test.txt multinode-517300-m03:/home/docker/cp-test_multinode-517300_multinode-517300-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-517300 cp multinode-517300:/home/docker/cp-test.txt multinode-517300-m03:/home/docker/cp-test_multinode-517300_multinode-517300-m03.txt: (17.1291392s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-517300 ssh -n multinode-517300 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-517300 ssh -n multinode-517300 "sudo cat /home/docker/cp-test.txt": (9.6574301s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-517300 ssh -n multinode-517300-m03 "sudo cat /home/docker/cp-test_multinode-517300_multinode-517300-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-517300 ssh -n multinode-517300-m03 "sudo cat /home/docker/cp-test_multinode-517300_multinode-517300-m03.txt": (9.661952s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-517300 cp testdata\cp-test.txt multinode-517300-m02:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-517300 cp testdata\cp-test.txt multinode-517300-m02:/home/docker/cp-test.txt: (9.658937s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-517300 ssh -n multinode-517300-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-517300 ssh -n multinode-517300-m02 "sudo cat /home/docker/cp-test.txt": (9.6543093s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-517300 cp multinode-517300-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiNodeserialCopyFile279711523\001\cp-test_multinode-517300-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-517300 cp multinode-517300-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiNodeserialCopyFile279711523\001\cp-test_multinode-517300-m02.txt: (9.6905958s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-517300 ssh -n multinode-517300-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-517300 ssh -n multinode-517300-m02 "sudo cat /home/docker/cp-test.txt": (9.7287709s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-517300 cp multinode-517300-m02:/home/docker/cp-test.txt multinode-517300:/home/docker/cp-test_multinode-517300-m02_multinode-517300.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-517300 cp multinode-517300-m02:/home/docker/cp-test.txt multinode-517300:/home/docker/cp-test_multinode-517300-m02_multinode-517300.txt: (16.946414s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-517300 ssh -n multinode-517300-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-517300 ssh -n multinode-517300-m02 "sudo cat /home/docker/cp-test.txt": (9.6232054s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-517300 ssh -n multinode-517300 "sudo cat /home/docker/cp-test_multinode-517300-m02_multinode-517300.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-517300 ssh -n multinode-517300 "sudo cat /home/docker/cp-test_multinode-517300-m02_multinode-517300.txt": (9.4627527s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-517300 cp multinode-517300-m02:/home/docker/cp-test.txt multinode-517300-m03:/home/docker/cp-test_multinode-517300-m02_multinode-517300-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-517300 cp multinode-517300-m02:/home/docker/cp-test.txt multinode-517300-m03:/home/docker/cp-test_multinode-517300-m02_multinode-517300-m03.txt: (16.6695396s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-517300 ssh -n multinode-517300-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-517300 ssh -n multinode-517300-m02 "sudo cat /home/docker/cp-test.txt": (9.6694647s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-517300 ssh -n multinode-517300-m03 "sudo cat /home/docker/cp-test_multinode-517300-m02_multinode-517300-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-517300 ssh -n multinode-517300-m03 "sudo cat /home/docker/cp-test_multinode-517300-m02_multinode-517300-m03.txt": (9.6113521s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-517300 cp testdata\cp-test.txt multinode-517300-m03:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-517300 cp testdata\cp-test.txt multinode-517300-m03:/home/docker/cp-test.txt: (9.6408885s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-517300 ssh -n multinode-517300-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-517300 ssh -n multinode-517300-m03 "sudo cat /home/docker/cp-test.txt": (9.6753586s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-517300 cp multinode-517300-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiNodeserialCopyFile279711523\001\cp-test_multinode-517300-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-517300 cp multinode-517300-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiNodeserialCopyFile279711523\001\cp-test_multinode-517300-m03.txt: (9.5677362s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-517300 ssh -n multinode-517300-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-517300 ssh -n multinode-517300-m03 "sudo cat /home/docker/cp-test.txt": (9.6454334s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-517300 cp multinode-517300-m03:/home/docker/cp-test.txt multinode-517300:/home/docker/cp-test_multinode-517300-m03_multinode-517300.txt
E0528 15:18:29.665768    1780 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-200200\client.crt: The system cannot find the path specified.
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-517300 cp multinode-517300-m03:/home/docker/cp-test.txt multinode-517300:/home/docker/cp-test_multinode-517300-m03_multinode-517300.txt: (16.8946221s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-517300 ssh -n multinode-517300-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-517300 ssh -n multinode-517300-m03 "sudo cat /home/docker/cp-test.txt": (9.5877677s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-517300 ssh -n multinode-517300 "sudo cat /home/docker/cp-test_multinode-517300-m03_multinode-517300.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-517300 ssh -n multinode-517300 "sudo cat /home/docker/cp-test_multinode-517300-m03_multinode-517300.txt": (9.5659249s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-517300 cp multinode-517300-m03:/home/docker/cp-test.txt multinode-517300-m02:/home/docker/cp-test_multinode-517300-m03_multinode-517300-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-517300 cp multinode-517300-m03:/home/docker/cp-test.txt multinode-517300-m02:/home/docker/cp-test_multinode-517300-m03_multinode-517300-m02.txt: (16.7739054s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-517300 ssh -n multinode-517300-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-517300 ssh -n multinode-517300-m03 "sudo cat /home/docker/cp-test.txt": (9.6722674s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-517300 ssh -n multinode-517300-m02 "sudo cat /home/docker/cp-test_multinode-517300-m03_multinode-517300-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-517300 ssh -n multinode-517300-m02 "sudo cat /home/docker/cp-test_multinode-517300-m03_multinode-517300-m02.txt": (9.6436197s)
--- PASS: TestMultiNode/serial/CopyFile (369.66s)
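
Note: the sequence above cycles "minikube cp" through all three directions (host to node, node to host, and node to node) and verifies every copy with "ssh -n ... sudo cat". A condensed sketch of the three forms, with the local temp directory shortened to a placeholder:

    out/minikube-windows-amd64.exe -p multinode-517300 cp testdata\cp-test.txt multinode-517300:/home/docker/cp-test.txt
    out/minikube-windows-amd64.exe -p multinode-517300 cp multinode-517300:/home/docker/cp-test.txt <local-dir>\cp-test_multinode-517300.txt
    out/minikube-windows-amd64.exe -p multinode-517300 cp multinode-517300:/home/docker/cp-test.txt multinode-517300-m02:/home/docker/cp-test_multinode-517300_multinode-517300-m02.txt
    out/minikube-windows-amd64.exe -p multinode-517300 ssh -n multinode-517300-m02 "sudo cat /home/docker/cp-test_multinode-517300_multinode-517300-m02.txt"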

                                                
                                    
TestMultiNode/serial/StopNode (76.87s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-517300 node stop m03
E0528 15:19:49.683442    1780 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-854100\client.crt: The system cannot find the path specified.
multinode_test.go:248: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-517300 node stop m03: (24.2035069s)
multinode_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-517300 status
E0528 15:20:06.444895    1780 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-854100\client.crt: The system cannot find the path specified.
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-517300 status: exit status 7 (26.6250376s)

                                                
                                                
-- stdout --
	multinode-517300
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-517300-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-517300-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0528 15:19:52.705860    8916 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
multinode_test.go:261: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-517300 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-517300 status --alsologtostderr: exit status 7 (26.0374941s)

                                                
                                                
-- stdout --
	multinode-517300
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-517300-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-517300-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0528 15:20:19.333476    1948 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0528 15:20:19.341458    1948 out.go:291] Setting OutFile to fd 916 ...
	I0528 15:20:19.342472    1948 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 15:20:19.342472    1948 out.go:304] Setting ErrFile to fd 904...
	I0528 15:20:19.342472    1948 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 15:20:19.359457    1948 out.go:298] Setting JSON to false
	I0528 15:20:19.359457    1948 mustload.go:65] Loading cluster: multinode-517300
	I0528 15:20:19.359457    1948 notify.go:220] Checking for updates...
	I0528 15:20:19.360463    1948 config.go:182] Loaded profile config "multinode-517300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0528 15:20:19.360463    1948 status.go:255] checking status of multinode-517300 ...
	I0528 15:20:19.361474    1948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300 ).state
	I0528 15:20:21.621958    1948 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:20:21.621958    1948 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:20:21.621958    1948 status.go:330] multinode-517300 host status = "Running" (err=<nil>)
	I0528 15:20:21.621958    1948 host.go:66] Checking if "multinode-517300" exists ...
	I0528 15:20:21.622709    1948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300 ).state
	I0528 15:20:23.905160    1948 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:20:23.905247    1948 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:20:23.905308    1948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300 ).networkadapters[0]).ipaddresses[0]
	I0528 15:20:26.510955    1948 main.go:141] libmachine: [stdout =====>] : 172.18.214.123
	
	I0528 15:20:26.510955    1948 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:20:26.512041    1948 host.go:66] Checking if "multinode-517300" exists ...
	I0528 15:20:26.524564    1948 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0528 15:20:26.524564    1948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300 ).state
	I0528 15:20:28.726022    1948 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:20:28.726022    1948 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:20:28.726022    1948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300 ).networkadapters[0]).ipaddresses[0]
	I0528 15:20:31.228287    1948 main.go:141] libmachine: [stdout =====>] : 172.18.214.123
	
	I0528 15:20:31.228287    1948 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:20:31.240741    1948 sshutil.go:53] new ssh client: &{IP:172.18.214.123 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-517300\id_rsa Username:docker}
	I0528 15:20:31.341568    1948 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.8169892s)
	I0528 15:20:31.355431    1948 ssh_runner.go:195] Run: systemctl --version
	I0528 15:20:31.378125    1948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0528 15:20:31.403096    1948 kubeconfig.go:125] found "multinode-517300" server: "https://172.18.214.123:8443"
	I0528 15:20:31.403207    1948 api_server.go:166] Checking apiserver status ...
	I0528 15:20:31.415592    1948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 15:20:31.456719    1948 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1999/cgroup
	W0528 15:20:31.474240    1948 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1999/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0528 15:20:31.488193    1948 ssh_runner.go:195] Run: ls
	I0528 15:20:31.495469    1948 api_server.go:253] Checking apiserver healthz at https://172.18.214.123:8443/healthz ...
	I0528 15:20:31.501818    1948 api_server.go:279] https://172.18.214.123:8443/healthz returned 200:
	ok
	I0528 15:20:31.501818    1948 status.go:422] multinode-517300 apiserver status = Running (err=<nil>)
	I0528 15:20:31.502996    1948 status.go:257] multinode-517300 status: &{Name:multinode-517300 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0528 15:20:31.503050    1948 status.go:255] checking status of multinode-517300-m02 ...
	I0528 15:20:31.503746    1948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300-m02 ).state
	I0528 15:20:33.587310    1948 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:20:33.587310    1948 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:20:33.587310    1948 status.go:330] multinode-517300-m02 host status = "Running" (err=<nil>)
	I0528 15:20:33.587310    1948 host.go:66] Checking if "multinode-517300-m02" exists ...
	I0528 15:20:33.588322    1948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300-m02 ).state
	I0528 15:20:35.751807    1948 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:20:35.751807    1948 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:20:35.763035    1948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300-m02 ).networkadapters[0]).ipaddresses[0]
	I0528 15:20:38.252878    1948 main.go:141] libmachine: [stdout =====>] : 172.18.223.89
	
	I0528 15:20:38.252878    1948 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:20:38.264256    1948 host.go:66] Checking if "multinode-517300-m02" exists ...
	I0528 15:20:38.275296    1948 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0528 15:20:38.275296    1948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300-m02 ).state
	I0528 15:20:40.386652    1948 main.go:141] libmachine: [stdout =====>] : Running
	
	I0528 15:20:40.386652    1948 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:20:40.386652    1948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-517300-m02 ).networkadapters[0]).ipaddresses[0]
	I0528 15:20:42.945629    1948 main.go:141] libmachine: [stdout =====>] : 172.18.223.89
	
	I0528 15:20:42.957153    1948 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:20:42.957428    1948 sshutil.go:53] new ssh client: &{IP:172.18.223.89 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-517300-m02\id_rsa Username:docker}
	I0528 15:20:43.048510    1948 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.7732001s)
	I0528 15:20:43.060049    1948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0528 15:20:43.087143    1948 status.go:257] multinode-517300-m02 status: &{Name:multinode-517300-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0528 15:20:43.087212    1948 status.go:255] checking status of multinode-517300-m03 ...
	I0528 15:20:43.088225    1948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-517300-m03 ).state
	I0528 15:20:45.234209    1948 main.go:141] libmachine: [stdout =====>] : Off
	
	I0528 15:20:45.234436    1948 main.go:141] libmachine: [stderr =====>] : 
	I0528 15:20:45.234496    1948 status.go:330] multinode-517300-m03 host status = "Stopped" (err=<nil>)
	I0528 15:20:45.234496    1948 status.go:343] host is not running, skipping remaining checks
	I0528 15:20:45.234496    1948 status.go:257] multinode-517300-m03 status: &{Name:multinode-517300-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (76.87s)
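
Note: stopping a single worker leaves the rest of the cluster untouched; "status" then exits with code 7 because one host is reported Stopped, and that non-zero exit is exactly what the test expects here, which is why the step still passes. Sketch:

    out/minikube-windows-amd64.exe -p multinode-517300 node stop m03
    out/minikube-windows-amd64.exe -p multinode-517300 status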

                                                
                                    
TestMultiNode/serial/StartAfterStop (183.56s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-517300 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-517300 node start m03 -v=7 --alsologtostderr: (2m28.0972971s)
multinode_test.go:290: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-517300 status -v=7 --alsologtostderr
E0528 15:23:29.672258    1780 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-200200\client.crt: The system cannot find the path specified.
multinode_test.go:290: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-517300 status -v=7 --alsologtostderr: (35.1668286s)
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (183.56s)
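
Note: the stopped worker is brought back with "node start" and the cluster is re-checked; a minimal sketch of the commands above:

    out/minikube-windows-amd64.exe -p multinode-517300 node start m03 -v=7 --alsologtostderr
    out/minikube-windows-amd64.exe -p multinode-517300 status -v=7 --alsologtostderr
    kubectl get nodes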

                                                
                                    
TestPreload (547s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-423800 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperv --kubernetes-version=v1.24.4
E0528 15:35:06.445082    1780 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-854100\client.crt: The system cannot find the path specified.
E0528 15:36:29.701894    1780 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-854100\client.crt: The system cannot find the path specified.
preload_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-423800 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperv --kubernetes-version=v1.24.4: (4m19.2831404s)
preload_test.go:52: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-423800 image pull gcr.io/k8s-minikube/busybox
E0528 15:38:29.670180    1780 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-200200\client.crt: The system cannot find the path specified.
preload_test.go:52: (dbg) Done: out/minikube-windows-amd64.exe -p test-preload-423800 image pull gcr.io/k8s-minikube/busybox: (8.1084384s)
preload_test.go:58: (dbg) Run:  out/minikube-windows-amd64.exe stop -p test-preload-423800
preload_test.go:58: (dbg) Done: out/minikube-windows-amd64.exe stop -p test-preload-423800: (39.0764185s)
preload_test.go:66: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-423800 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperv
E0528 15:40:06.443489    1780 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-854100\client.crt: The system cannot find the path specified.
preload_test.go:66: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-423800 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperv: (3m11.975547s)
preload_test.go:71: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-423800 image list
preload_test.go:71: (dbg) Done: out/minikube-windows-amd64.exe -p test-preload-423800 image list: (7.1879767s)
helpers_test.go:175: Cleaning up "test-preload-423800" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p test-preload-423800
E0528 15:43:12.925506    1780 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-200200\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p test-preload-423800: (41.3469118s)
--- PASS: TestPreload (547.00s)
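
Note: the preload check builds image state the slow way and then verifies it survives a restart: start an older cluster with --preload=false, pull an extra image, stop, start again with both flags dropped (so the default, preloaded Kubernetes applies), and list images on the restarted cluster. A condensed sketch with the same profile:

    out/minikube-windows-amd64.exe start -p test-preload-423800 --memory=2200 --wait=true --preload=false --driver=hyperv --kubernetes-version=v1.24.4
    out/minikube-windows-amd64.exe -p test-preload-423800 image pull gcr.io/k8s-minikube/busybox
    out/minikube-windows-amd64.exe stop -p test-preload-423800
    out/minikube-windows-amd64.exe start -p test-preload-423800 --memory=2200 --wait=true --driver=hyperv
    out/minikube-windows-amd64.exe -p test-preload-423800 image list
    out/minikube-windows-amd64.exe delete -p test-preload-423800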

                                                
                                    
TestScheduledStopWindows (331.09s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe start -p scheduled-stop-226600 --memory=2048 --driver=hyperv
E0528 15:43:29.667209    1780 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-200200\client.crt: The system cannot find the path specified.
E0528 15:45:06.455179    1780 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-854100\client.crt: The system cannot find the path specified.
scheduled_stop_test.go:128: (dbg) Done: out/minikube-windows-amd64.exe start -p scheduled-stop-226600 --memory=2048 --driver=hyperv: (3m17.8528251s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-226600 --schedule 5m
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-226600 --schedule 5m: (10.8999045s)
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-226600 -n scheduled-stop-226600
scheduled_stop_test.go:191: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-226600 -n scheduled-stop-226600: exit status 1 (10.0237802s)

                                                
                                                
** stderr ** 
	W0528 15:46:42.686881    7808 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
scheduled_stop_test.go:191: status error: exit status 1 (may be ok)
scheduled_stop_test.go:54: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p scheduled-stop-226600 -- sudo systemctl show minikube-scheduled-stop --no-page
scheduled_stop_test.go:54: (dbg) Done: out/minikube-windows-amd64.exe ssh -p scheduled-stop-226600 -- sudo systemctl show minikube-scheduled-stop --no-page: (9.5671699s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-226600 --schedule 5s
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-226600 --schedule 5s: (10.7117868s)
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe status -p scheduled-stop-226600
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p scheduled-stop-226600: exit status 7 (2.3507118s)

                                                
                                                
-- stdout --
	scheduled-stop-226600
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0528 15:48:13.000144   14948 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-226600 -n scheduled-stop-226600
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-226600 -n scheduled-stop-226600: exit status 7 (2.3375356s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
** stderr ** 
	W0528 15:48:15.358187    9424 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-226600" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p scheduled-stop-226600
E0528 15:48:29.680238    1780 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-200200\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p scheduled-stop-226600: (27.3269402s)
--- PASS: TestScheduledStopWindows (331.09s)
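
Note: scheduled stop is armed inside the guest as a minikube-scheduled-stop systemd unit rather than blocking the client, which is why the flow above can schedule a 5m stop, inspect the unit over ssh, reschedule to 5s, and then check "status" until the host reports Stopped (exit status 7, flagged "may be ok"). A minimal sketch:

    out/minikube-windows-amd64.exe stop -p scheduled-stop-226600 --schedule 5m
    out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-226600 -n scheduled-stop-226600
    out/minikube-windows-amd64.exe ssh -p scheduled-stop-226600 -- sudo systemctl show minikube-scheduled-stop --no-page
    out/minikube-windows-amd64.exe stop -p scheduled-stop-226600 --schedule 5s
    out/minikube-windows-amd64.exe status -p scheduled-stop-226600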

                                                
                                    
TestRunningBinaryUpgrade (1025.96s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  C:\Users\jenkins.minikube1\AppData\Local\Temp\minikube-v1.26.0.2027529969.exe start -p running-upgrade-205300 --memory=2200 --vm-driver=hyperv
version_upgrade_test.go:120: (dbg) Done: C:\Users\jenkins.minikube1\AppData\Local\Temp\minikube-v1.26.0.2027529969.exe start -p running-upgrade-205300 --memory=2200 --vm-driver=hyperv: (7m7.0895559s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-windows-amd64.exe start -p running-upgrade-205300 --memory=2200 --alsologtostderr -v=1 --driver=hyperv
E0528 16:08:29.672968    1780 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-200200\client.crt: The system cannot find the path specified.
version_upgrade_test.go:130: (dbg) Done: out/minikube-windows-amd64.exe start -p running-upgrade-205300 --memory=2200 --alsologtostderr -v=1 --driver=hyperv: (8m52.0874365s)
helpers_test.go:175: Cleaning up "running-upgrade-205300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p running-upgrade-205300
E0528 16:16:32.956017    1780 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-200200\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p running-upgrade-205300: (1m5.6073322s)
--- PASS: TestRunningBinaryUpgrade (1025.96s)
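
Note: the upgrade path here is to create a cluster with a legacy release binary (v1.26.0, downloaded to a temp path) and then run the freshly built binary against the same, still-running profile so "start" reconciles it in place. Sketch, with the temp binary path shortened to a placeholder:

    <temp>\minikube-v1.26.0.exe start -p running-upgrade-205300 --memory=2200 --vm-driver=hyperv
    out/minikube-windows-amd64.exe start -p running-upgrade-205300 --memory=2200 --alsologtostderr -v=1 --driver=hyperv
    out/minikube-windows-amd64.exe delete -p running-upgrade-205300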

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.3s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-390500 --no-kubernetes --kubernetes-version=1.20 --driver=hyperv
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-390500 --no-kubernetes --kubernetes-version=1.20 --driver=hyperv: exit status 14 (294.3791ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-390500] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4412 Build 19045.4412
	  - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18966
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0528 15:48:45.043174    4164 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.30s)
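
Note: this step passes precisely because the command is rejected: --no-kubernetes and --kubernetes-version are mutually exclusive, so minikube exits with status 14 (MK_USAGE) before touching the driver. To actually start a no-Kubernetes profile, drop the version flag, or clear a globally configured version as the error text suggests:

    out/minikube-windows-amd64.exe config unset kubernetes-version
    out/minikube-windows-amd64.exe start -p NoKubernetes-390500 --no-kubernetes --driver=hyperv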

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.69s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.69s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (856.04s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  C:\Users\jenkins.minikube1\AppData\Local\Temp\minikube-v1.26.0.1668114496.exe start -p stopped-upgrade-347000 --memory=2200 --vm-driver=hyperv
version_upgrade_test.go:183: (dbg) Done: C:\Users\jenkins.minikube1\AppData\Local\Temp\minikube-v1.26.0.1668114496.exe start -p stopped-upgrade-347000 --memory=2200 --vm-driver=hyperv: (7m8.1417409s)
version_upgrade_test.go:192: (dbg) Run:  C:\Users\jenkins.minikube1\AppData\Local\Temp\minikube-v1.26.0.1668114496.exe -p stopped-upgrade-347000 stop
version_upgrade_test.go:192: (dbg) Done: C:\Users\jenkins.minikube1\AppData\Local\Temp\minikube-v1.26.0.1668114496.exe -p stopped-upgrade-347000 stop: (36.3499613s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-windows-amd64.exe start -p stopped-upgrade-347000 --memory=2200 --alsologtostderr -v=1 --driver=hyperv
E0528 16:13:29.672842    1780 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-200200\client.crt: The system cannot find the path specified.
version_upgrade_test.go:198: (dbg) Done: out/minikube-windows-amd64.exe start -p stopped-upgrade-347000 --memory=2200 --alsologtostderr -v=1 --driver=hyperv: (6m31.5489196s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (856.04s)
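
Note: same idea as TestRunningBinaryUpgrade, except the legacy cluster is stopped first, so the new binary upgrades a cold profile rather than a running one. Sketch, temp path shortened:

    <temp>\minikube-v1.26.0.exe start -p stopped-upgrade-347000 --memory=2200 --vm-driver=hyperv
    <temp>\minikube-v1.26.0.exe -p stopped-upgrade-347000 stop
    out/minikube-windows-amd64.exe start -p stopped-upgrade-347000 --memory=2200 --alsologtostderr -v=1 --driver=hyperv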

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (10.14s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-windows-amd64.exe logs -p stopped-upgrade-347000
version_upgrade_test.go:206: (dbg) Done: out/minikube-windows-amd64.exe logs -p stopped-upgrade-347000: (10.1389312s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (10.14s)

                                                
                                    

Test skip (30/200)

TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.1/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.1/binaries (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false windows amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (300.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-200200 --alsologtostderr -v=1]
functional_test.go:912: output didn't produce a URL
functional_test.go:906: (dbg) stopping [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-200200 --alsologtostderr -v=1] ...
helpers_test.go:502: unable to terminate pid 11300: Access is denied.
--- SKIP: TestFunctional/parallel/DashboardCmd (300.02s)

                                                
                                    
TestFunctional/parallel/DryRun (5.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-200200 --dry-run --memory 250MB --alsologtostderr --driver=hyperv
functional_test.go:970: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-200200 --dry-run --memory 250MB --alsologtostderr --driver=hyperv: exit status 1 (5.0368657s)

                                                
                                                
-- stdout --
	* [functional-200200] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4412 Build 19045.4412
	  - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18966
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true

                                                
                                                
-- /stdout --
** stderr ** 
	W0528 13:54:35.852387    2844 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0528 13:54:35.853388    2844 out.go:291] Setting OutFile to fd 688 ...
	I0528 13:54:35.854384    2844 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 13:54:35.854384    2844 out.go:304] Setting ErrFile to fd 892...
	I0528 13:54:35.854384    2844 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 13:54:35.878387    2844 out.go:298] Setting JSON to false
	I0528 13:54:35.882848    2844 start.go:129] hostinfo: {"hostname":"minikube1","uptime":2440,"bootTime":1716927235,"procs":203,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4412 Build 19045.4412","kernelVersion":"10.0.19045.4412 Build 19045.4412","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0528 13:54:35.882848    2844 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0528 13:54:35.886383    2844 out.go:177] * [functional-200200] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4412 Build 19045.4412
	I0528 13:54:35.889388    2844 notify.go:220] Checking for updates...
	I0528 13:54:35.892630    2844 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0528 13:54:35.895173    2844 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0528 13:54:35.897348    2844 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0528 13:54:35.900891    2844 out.go:177]   - MINIKUBE_LOCATION=18966
	I0528 13:54:35.903749    2844 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0528 13:54:35.906751    2844 config.go:182] Loaded profile config "functional-200200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0528 13:54:35.908049    2844 driver.go:392] Setting default libvirt URI to qemu:///system

                                                
                                                
** /stderr **
functional_test.go:976: skipping this error on HyperV till this issue is solved https://github.com/kubernetes/minikube/issues/9785
--- SKIP: TestFunctional/parallel/DryRun (5.04s)

                                                
                                    
TestFunctional/parallel/InternationalLanguage (5.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-200200 --dry-run --memory 250MB --alsologtostderr --driver=hyperv
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-200200 --dry-run --memory 250MB --alsologtostderr --driver=hyperv: exit status 1 (5.0706061s)

                                                
                                                
-- stdout --
	* [functional-200200] minikube v1.33.1 sur Microsoft Windows 10 Enterprise N 10.0.19045.4412 Build 19045.4412
	  - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18966
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true

                                                
                                                
-- /stdout --
** stderr ** 
	W0528 13:54:16.792807   14668 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube1\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0528 13:54:16.794125   14668 out.go:291] Setting OutFile to fd 1008 ...
	I0528 13:54:16.795085   14668 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 13:54:16.795085   14668 out.go:304] Setting ErrFile to fd 904...
	I0528 13:54:16.795085   14668 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 13:54:16.822792   14668 out.go:298] Setting JSON to false
	I0528 13:54:16.825795   14668 start.go:129] hostinfo: {"hostname":"minikube1","uptime":2421,"bootTime":1716927235,"procs":205,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4412 Build 19045.4412","kernelVersion":"10.0.19045.4412 Build 19045.4412","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0528 13:54:16.825795   14668 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0528 13:54:16.829797   14668 out.go:177] * [functional-200200] minikube v1.33.1 sur Microsoft Windows 10 Enterprise N 10.0.19045.4412 Build 19045.4412
	I0528 13:54:16.832796   14668 notify.go:220] Checking for updates...
	I0528 13:54:16.834807   14668 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0528 13:54:16.837796   14668 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0528 13:54:16.839797   14668 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0528 13:54:16.842808   14668 out.go:177]   - MINIKUBE_LOCATION=18966
	I0528 13:54:16.844798   14668 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0528 13:54:16.848798   14668 config.go:182] Loaded profile config "functional-200200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0528 13:54:16.849793   14668 driver.go:392] Setting default libvirt URI to qemu:///system

                                                
                                                
** /stderr **
functional_test.go:1021: skipping this error on HyperV till this issue is solved https://github.com/kubernetes/minikube/issues/9785
--- SKIP: TestFunctional/parallel/InternationalLanguage (5.07s)

                                                
                                    
TestFunctional/parallel/MountCmd (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd
=== PAUSE TestFunctional/parallel/MountCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MountCmd
functional_test_mount_test.go:57: skipping: mount broken on hyperv: https://github.com/kubernetes/minikube/issues/5029
--- SKIP: TestFunctional/parallel/MountCmd (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:230: The test WaitService/IngressIP is broken on hyperv https://github.com/kubernetes/minikube/issues/8381
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:258: skipping: access direct test is broken on windows: https://github.com/kubernetes/minikube/issues/8304
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopUnix (0s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:76: test only runs on unix
--- SKIP: TestScheduledStopUnix (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:39: skipping due to https://github.com/kubernetes/minikube/issues/14232
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)
